Dec 16 03:14:02.045442 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 03:14:02.045520 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:14:02.045540 kernel: BIOS-provided physical RAM map: Dec 16 03:14:02.045552 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 03:14:02.045564 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 03:14:02.045577 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 03:14:02.045591 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 16 03:14:02.045611 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 16 03:14:02.045625 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 03:14:02.045638 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 03:14:02.045648 kernel: NX (Execute Disable) protection: active Dec 16 03:14:02.045664 kernel: APIC: Static calls initialized Dec 16 03:14:02.045674 kernel: SMBIOS 2.8 present. Dec 16 03:14:02.045685 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 16 03:14:02.045698 kernel: DMI: Memory slots populated: 1/1 Dec 16 03:14:02.045706 kernel: Hypervisor detected: KVM Dec 16 03:14:02.045720 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 16 03:14:02.045729 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 03:14:02.045737 kernel: kvm-clock: using sched offset of 4221998739 cycles Dec 16 03:14:02.045746 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 03:14:02.045755 kernel: tsc: Detected 2494.136 MHz processor Dec 16 03:14:02.045765 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 03:14:02.045774 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 03:14:02.046010 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 16 03:14:02.046027 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 03:14:02.046039 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 03:14:02.046051 kernel: ACPI: Early table checksum verification disabled Dec 16 03:14:02.046063 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 16 03:14:02.046076 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:14:02.046090 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:14:02.046105 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:14:02.046121 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 16 03:14:02.046130 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:14:02.046144 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:14:02.046157 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 
03:14:02.046169 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 03:14:02.046181 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Dec 16 03:14:02.046193 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Dec 16 03:14:02.046210 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 16 03:14:02.046220 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Dec 16 03:14:02.046233 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Dec 16 03:14:02.046241 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Dec 16 03:14:02.046250 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Dec 16 03:14:02.046262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 03:14:02.046271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 03:14:02.046281 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Dec 16 03:14:02.046290 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Dec 16 03:14:02.046299 kernel: Zone ranges: Dec 16 03:14:02.046313 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 03:14:02.047492 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 16 03:14:02.047521 kernel: Normal empty Dec 16 03:14:02.047535 kernel: Device empty Dec 16 03:14:02.047549 kernel: Movable zone start for each node Dec 16 03:14:02.047562 kernel: Early memory node ranges Dec 16 03:14:02.047574 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 03:14:02.047583 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 16 03:14:02.047592 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 16 03:14:02.047608 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 03:14:02.047617 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 03:14:02.047626 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 16 03:14:02.047635 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 03:14:02.047649 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 03:14:02.047658 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 03:14:02.047670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 03:14:02.047683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 03:14:02.047692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 03:14:02.047704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 03:14:02.047713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 03:14:02.047722 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 03:14:02.047732 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 03:14:02.047741 kernel: TSC deadline timer available Dec 16 03:14:02.047757 kernel: CPU topo: Max. logical packages: 1 Dec 16 03:14:02.048226 kernel: CPU topo: Max. logical dies: 1 Dec 16 03:14:02.048247 kernel: CPU topo: Max. dies per package: 1 Dec 16 03:14:02.048261 kernel: CPU topo: Max. threads per core: 1 Dec 16 03:14:02.048274 kernel: CPU topo: Num. cores per package: 2 Dec 16 03:14:02.048288 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 03:14:02.048301 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 03:14:02.048314 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 03:14:02.048333 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 16 03:14:02.048346 kernel: Booting paravirtualized kernel on KVM Dec 16 03:14:02.048359 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 03:14:02.048372 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 03:14:02.048386 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 03:14:02.048400 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 03:14:02.048411 kernel: pcpu-alloc: [0] 0 1 Dec 16 03:14:02.048422 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 16 03:14:02.048434 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:14:02.048444 kernel: random: crng init done Dec 16 03:14:02.048453 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 03:14:02.048462 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 03:14:02.048471 kernel: Fallback order for Node 0: 0 Dec 16 03:14:02.048480 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Dec 16 03:14:02.048492 kernel: Policy zone: DMA32 Dec 16 03:14:02.048502 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 03:14:02.048516 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 03:14:02.050858 kernel: Kernel/User page tables isolation: enabled Dec 16 03:14:02.050883 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 03:14:02.050896 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 03:14:02.050908 kernel: Dynamic Preempt: voluntary Dec 16 03:14:02.050929 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 03:14:02.050944 kernel: rcu: RCU event tracing is enabled. Dec 16 03:14:02.050958 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 03:14:02.050972 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 03:14:02.050985 kernel: Rude variant of Tasks RCU enabled. Dec 16 03:14:02.050998 kernel: Tracing variant of Tasks RCU enabled. Dec 16 03:14:02.051012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 03:14:02.051024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 03:14:02.051038 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:14:02.051052 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:14:02.051062 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:14:02.051071 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 03:14:02.051081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 03:14:02.051090 kernel: Console: colour VGA+ 80x25 Dec 16 03:14:02.051099 kernel: printk: legacy console [tty0] enabled Dec 16 03:14:02.051111 kernel: printk: legacy console [ttyS0] enabled Dec 16 03:14:02.051120 kernel: ACPI: Core revision 20240827 Dec 16 03:14:02.051130 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 03:14:02.051147 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 03:14:02.051160 kernel: x2apic enabled Dec 16 03:14:02.051170 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 03:14:02.051179 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 03:14:02.051189 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns Dec 16 03:14:02.051203 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136) Dec 16 03:14:02.051215 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 03:14:02.051225 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 03:14:02.051235 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 03:14:02.051248 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 03:14:02.051267 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 03:14:02.051280 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 03:14:02.051295 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 03:14:02.051309 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 03:14:02.051323 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 03:14:02.051339 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 03:14:02.051350 kernel: active return thunk: its_return_thunk Dec 16 03:14:02.051363 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 03:14:02.051372 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 03:14:02.051382 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 03:14:02.051392 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 03:14:02.051401 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 03:14:02.051411 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 03:14:02.051421 kernel: Freeing SMP alternatives memory: 32K Dec 16 03:14:02.051433 kernel: pid_max: default: 32768 minimum: 301 Dec 16 03:14:02.051443 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 03:14:02.051452 kernel: landlock: Up and running. Dec 16 03:14:02.051462 kernel: SELinux: Initializing. Dec 16 03:14:02.051471 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 03:14:02.051481 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 03:14:02.051491 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 16 03:14:02.051504 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Dec 16 03:14:02.051514 kernel: signal: max sigframe size: 1776 Dec 16 03:14:02.051523 kernel: rcu: Hierarchical SRCU implementation. Dec 16 03:14:02.051533 kernel: rcu: Max phase no-delay instances is 400. 
Dec 16 03:14:02.051543 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 03:14:02.051553 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 03:14:02.051562 kernel: smp: Bringing up secondary CPUs ... Dec 16 03:14:02.051578 kernel: smpboot: x86: Booting SMP configuration: Dec 16 03:14:02.051588 kernel: .... node #0, CPUs: #1 Dec 16 03:14:02.051598 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 03:14:02.051607 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS) Dec 16 03:14:02.051618 kernel: Memory: 1983292K/2096612K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 108756K reserved, 0K cma-reserved) Dec 16 03:14:02.051628 kernel: devtmpfs: initialized Dec 16 03:14:02.051637 kernel: x86/mm: Memory block size: 128MB Dec 16 03:14:02.051647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 03:14:02.051659 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 03:14:02.051669 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 03:14:02.051679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 03:14:02.051688 kernel: audit: initializing netlink subsys (disabled) Dec 16 03:14:02.051698 kernel: audit: type=2000 audit(1765854838.728:1): state=initialized audit_enabled=0 res=1 Dec 16 03:14:02.051707 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 03:14:02.051717 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 03:14:02.051731 kernel: cpuidle: using governor menu Dec 16 03:14:02.051740 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 03:14:02.051750 kernel: dca service started, version 1.12.1 Dec 16 03:14:02.051759 kernel: PCI: Using configuration type 1 for base access Dec 16 03:14:02.051769 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 03:14:02.051778 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 03:14:02.051839 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 03:14:02.051854 kernel: ACPI: Added _OSI(Module Device) Dec 16 03:14:02.051864 kernel: ACPI: Added _OSI(Processor Device) Dec 16 03:14:02.051874 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 03:14:02.051884 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 03:14:02.051893 kernel: ACPI: Interpreter enabled Dec 16 03:14:02.051902 kernel: ACPI: PM: (supports S0 S5) Dec 16 03:14:02.051912 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 03:14:02.051925 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 03:14:02.051935 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 03:14:02.051944 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 16 03:14:02.051954 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 03:14:02.052235 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 16 03:14:02.052405 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 16 03:14:02.052572 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 16 03:14:02.052585 kernel: acpiphp: Slot [3] registered Dec 16 03:14:02.052596 kernel: acpiphp: Slot [4] registered Dec 16 03:14:02.052605 kernel: acpiphp: Slot [5] registered Dec 16 03:14:02.052615 kernel: acpiphp: Slot [6] registered Dec 16 03:14:02.052625 kernel: acpiphp: Slot [7] registered Dec 16 03:14:02.052635 kernel: acpiphp: Slot [8] registered Dec 16 03:14:02.052648 kernel: acpiphp: Slot [9] registered Dec 16 03:14:02.052657 kernel: acpiphp: Slot [10] registered Dec 16 03:14:02.052667 kernel: acpiphp: Slot [11] registered Dec 16 03:14:02.052677 kernel: acpiphp: Slot [12] registered Dec 16 03:14:02.052686 kernel: acpiphp: Slot [13] registered Dec 16 03:14:02.052695 kernel: acpiphp: Slot [14] registered Dec 16 03:14:02.052705 kernel: acpiphp: Slot [15] registered Dec 16 03:14:02.052718 kernel: acpiphp: Slot [16] registered Dec 16 03:14:02.052728 kernel: acpiphp: Slot [17] registered Dec 16 03:14:02.052737 kernel: acpiphp: Slot [18] registered Dec 16 03:14:02.052746 kernel: acpiphp: Slot [19] registered Dec 16 03:14:02.052756 kernel: acpiphp: Slot [20] registered Dec 16 03:14:02.052765 kernel: acpiphp: Slot [21] registered Dec 16 03:14:02.052775 kernel: acpiphp: Slot [22] registered Dec 16 03:14:02.054542 kernel: acpiphp: Slot [23] registered Dec 16 03:14:02.054580 kernel: acpiphp: Slot [24] registered Dec 16 03:14:02.054591 kernel: acpiphp: Slot [25] registered Dec 16 03:14:02.054602 kernel: acpiphp: Slot [26] registered Dec 16 03:14:02.054612 kernel: acpiphp: Slot [27] registered Dec 16 03:14:02.054621 kernel: acpiphp: Slot [28] registered Dec 16 03:14:02.054631 kernel: acpiphp: Slot [29] registered Dec 16 03:14:02.054641 kernel: acpiphp: Slot [30] registered Dec 16 03:14:02.054654 kernel: acpiphp: Slot [31] registered Dec 16 03:14:02.054664 kernel: PCI host bridge to bus 0000:00 Dec 16 03:14:02.056991 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 03:14:02.057168 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 03:14:02.057321 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 03:14:02.057446 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Dec 16 03:14:02.057598 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 16 03:14:02.057715 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 03:14:02.057907 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 16 03:14:02.058052 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 16 03:14:02.058238 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Dec 16 03:14:02.058478 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Dec 16 03:14:02.058665 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Dec 16 03:14:02.060534 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Dec 16 03:14:02.065102 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Dec 16 03:14:02.065339 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Dec 16 03:14:02.065599 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Dec 16 03:14:02.065764 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Dec 16 03:14:02.065948 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 16 03:14:02.066085 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 16 03:14:02.066218 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 16 03:14:02.066363 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Dec 16 03:14:02.066507 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Dec 16 03:14:02.066636 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Dec 16 03:14:02.066765 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Dec 16 03:14:02.068005 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Dec 16 03:14:02.068166 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 03:14:02.068351 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 03:14:02.068522 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Dec 16 03:14:02.068664 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Dec 16 03:14:02.068826 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Dec 16 03:14:02.068971 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 03:14:02.069101 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Dec 16 03:14:02.069238 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Dec 16 03:14:02.069380 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 16 03:14:02.069577 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:14:02.069727 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Dec 16 03:14:02.070595 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Dec 16 03:14:02.073960 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 16 03:14:02.074212 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:14:02.074350 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Dec 16 03:14:02.074483 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Dec 16 03:14:02.074614 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Dec 16 03:14:02.074768 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 03:14:02.074918 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Dec 16 03:14:02.075055 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Dec 16 03:14:02.075224 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Dec 16 03:14:02.075387 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 03:14:02.075521 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Dec 16 03:14:02.075657 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 16 03:14:02.075676 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 03:14:02.075687 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 03:14:02.075697 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 03:14:02.075708 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 03:14:02.075718 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 16 03:14:02.075728 kernel: iommu: Default domain type: Translated Dec 16 03:14:02.075739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 03:14:02.075752 kernel: PCI: Using ACPI for IRQ routing Dec 16 03:14:02.075762 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 03:14:02.075772 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 03:14:02.077534 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 16 03:14:02.077930 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 16 03:14:02.078174 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 16 03:14:02.078383 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 03:14:02.078418 kernel: vgaarb: loaded Dec 16 03:14:02.078433 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 03:14:02.078448 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 03:14:02.078462 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 03:14:02.078476 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 03:14:02.078491 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 03:14:02.078506 kernel: pnp: PnP ACPI init Dec 16 03:14:02.078525 kernel: pnp: PnP ACPI: found 4 devices Dec 16 03:14:02.078540 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 03:14:02.078553 kernel: NET: Registered PF_INET protocol family Dec 16 03:14:02.078569 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 03:14:02.078585 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 03:14:02.078599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 03:14:02.078614 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 03:14:02.078634 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 03:14:02.078649 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 03:14:02.078683 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 03:14:02.078707 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 03:14:02.078735 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 03:14:02.078754 kernel: NET: Registered PF_XDP protocol family Dec 16 03:14:02.080939 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 03:14:02.081202 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 03:14:02.081402 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 03:14:02.081626 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 16 03:14:02.082949 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 16 03:14:02.083232 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 16 03:14:02.083451 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 03:14:02.083492 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 16 03:14:02.083701 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25582 usecs Dec 16 03:14:02.083724 kernel: PCI: CLS 0 bytes, default 64 Dec 16 03:14:02.083741 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 03:14:02.083758 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns Dec 16 03:14:02.083774 kernel: Initialise system trusted keyrings Dec 16 03:14:02.084865 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 03:14:02.084900 kernel: Key type asymmetric registered Dec 16 03:14:02.084917 kernel: Asymmetric key parser 'x509' registered Dec 16 03:14:02.084933 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 03:14:02.084950 kernel: io scheduler mq-deadline registered Dec 16 03:14:02.084967 kernel: io scheduler kyber registered Dec 16 03:14:02.084984 kernel: io scheduler bfq registered Dec 16 03:14:02.085000 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 03:14:02.085022 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 16 03:14:02.085039 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 16 03:14:02.085055 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 16 03:14:02.085072 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 03:14:02.085089 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 03:14:02.085105 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 03:14:02.085120 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 03:14:02.085139 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 03:14:02.085155 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 03:14:02.085473 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 03:14:02.085697 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 03:14:02.085925 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T03:14:00 UTC (1765854840) Dec 16 03:14:02.086121 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 03:14:02.086166 kernel: intel_pstate: CPU model not supported Dec 16 03:14:02.086183 kernel: NET: Registered PF_INET6 protocol family Dec 16 03:14:02.086200 kernel: Segment Routing with IPv6 Dec 16 03:14:02.086216 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 03:14:02.086232 kernel: NET: Registered PF_PACKET protocol family Dec 16 03:14:02.086247 kernel: Key type dns_resolver registered Dec 16 03:14:02.086262 kernel: IPI shorthand broadcast: enabled Dec 16 03:14:02.086276 kernel: sched_clock: Marking stable (1930003956, 143959423)->(2205706461, -131743082) Dec 16 03:14:02.086298 kernel: registered taskstats version 1 Dec 16 03:14:02.086313 kernel: Loading compiled-in X.509 certificates Dec 16 03:14:02.086329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 
16 03:14:02.086344 kernel: Demotion targets for Node 0: null Dec 16 03:14:02.086359 kernel: Key type .fscrypt registered Dec 16 03:14:02.086374 kernel: Key type fscrypt-provisioning registered Dec 16 03:14:02.086416 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 03:14:02.086437 kernel: ima: Allocated hash algorithm: sha1 Dec 16 03:14:02.086453 kernel: ima: No architecture policies found Dec 16 03:14:02.086468 kernel: clk: Disabling unused clocks Dec 16 03:14:02.086483 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 03:14:02.086499 kernel: Write protecting the kernel read-only data: 47104k Dec 16 03:14:02.086514 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 03:14:02.086530 kernel: Run /init as init process Dec 16 03:14:02.086551 kernel: with arguments: Dec 16 03:14:02.086567 kernel: /init Dec 16 03:14:02.086583 kernel: with environment: Dec 16 03:14:02.086598 kernel: HOME=/ Dec 16 03:14:02.086613 kernel: TERM=linux Dec 16 03:14:02.086630 kernel: SCSI subsystem initialized Dec 16 03:14:02.086646 kernel: libata version 3.00 loaded. Dec 16 03:14:02.090082 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 16 03:14:02.090407 kernel: scsi host0: ata_piix Dec 16 03:14:02.090657 kernel: scsi host1: ata_piix Dec 16 03:14:02.090684 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 16 03:14:02.090700 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 16 03:14:02.090715 kernel: ACPI: bus type USB registered Dec 16 03:14:02.090748 kernel: usbcore: registered new interface driver usbfs Dec 16 03:14:02.090764 kernel: usbcore: registered new interface driver hub Dec 16 03:14:02.092579 kernel: usbcore: registered new device driver usb Dec 16 03:14:02.092962 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 16 03:14:02.093118 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 16 03:14:02.093255 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 16 03:14:02.093420 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 16 03:14:02.093627 kernel: hub 1-0:1.0: USB hub found Dec 16 03:14:02.093771 kernel: hub 1-0:1.0: 2 ports detected Dec 16 03:14:02.094006 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 16 03:14:02.094142 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 03:14:02.094157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 03:14:02.094168 kernel: GPT:16515071 != 125829119 Dec 16 03:14:02.094178 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 03:14:02.094188 kernel: GPT:16515071 != 125829119 Dec 16 03:14:02.094198 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 03:14:02.094212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 03:14:02.094349 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 16 03:14:02.094478 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Dec 16 03:14:02.094620 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 16 03:14:02.094767 kernel: scsi host2: Virtio SCSI HBA Dec 16 03:14:02.094796 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 03:14:02.094811 kernel: device-mapper: uevent: version 1.0.3 Dec 16 03:14:02.094822 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 03:14:02.094832 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 03:14:02.094843 kernel: raid6: avx2x4 gen() 17437 MB/s Dec 16 03:14:02.094853 kernel: raid6: avx2x2 gen() 17636 MB/s Dec 16 03:14:02.094863 kernel: raid6: avx2x1 gen() 13333 MB/s Dec 16 03:14:02.094874 kernel: raid6: using algorithm avx2x2 gen() 17636 MB/s Dec 16 03:14:02.094887 kernel: raid6: .... xor() 19186 MB/s, rmw enabled Dec 16 03:14:02.094898 kernel: raid6: using avx2x2 recovery algorithm Dec 16 03:14:02.094909 kernel: xor: automatically using best checksumming function avx Dec 16 03:14:02.094919 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 03:14:02.094929 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (162) Dec 16 03:14:02.094939 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 03:14:02.094950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:14:02.094964 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 03:14:02.094974 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 03:14:02.094984 kernel: loop: module loaded Dec 16 03:14:02.094995 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 03:14:02.095005 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 03:14:02.095016 systemd[1]: Successfully made /usr/ read-only. Dec 16 03:14:02.095033 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:14:02.095044 systemd[1]: Detected virtualization kvm. Dec 16 03:14:02.095054 systemd[1]: Detected architecture x86-64. Dec 16 03:14:02.095065 systemd[1]: Running in initrd. Dec 16 03:14:02.095075 systemd[1]: No hostname configured, using default hostname. Dec 16 03:14:02.095086 systemd[1]: Hostname set to . Dec 16 03:14:02.095100 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:14:02.095111 systemd[1]: Queued start job for default target initrd.target. Dec 16 03:14:02.095121 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:14:02.095131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:14:02.095142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:14:02.095158 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:14:02.095168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:14:02.095183 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:14:02.095194 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:14:02.095204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 03:14:02.095215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:14:02.095226 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:14:02.095239 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:14:02.095250 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:14:02.095261 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:14:02.095274 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:14:02.095285 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:14:02.095296 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:14:02.095306 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:14:02.095320 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:14:02.095332 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 03:14:02.095342 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:14:02.095353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:14:02.095364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:14:02.095375 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:14:02.095385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:14:02.095399 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:14:02.095410 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:14:02.095421 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:14:02.095432 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:14:02.095442 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:14:02.095453 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:14:02.095463 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:14:02.095478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:14:02.095489 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:14:02.095499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:14:02.095513 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:14:02.095524 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:14:02.095567 systemd-journald[299]: Collecting audit messages is enabled. Dec 16 03:14:02.095596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:14:02.095607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:14:02.095617 kernel: Bridge firewalling registered Dec 16 03:14:02.095629 systemd-journald[299]: Journal started Dec 16 03:14:02.095651 systemd-journald[299]: Runtime Journal (/run/log/journal/07647a9e05464e36ac0ba80722a78d5b) is 4.8M, max 39.1M, 34.2M free. 
Dec 16 03:14:02.091857 systemd-modules-load[301]: Inserted module 'br_netfilter' Dec 16 03:14:02.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.150797 kernel: audit: type=1130 audit(1765854842.148:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.150833 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:14:02.156347 kernel: audit: type=1130 audit(1765854842.154:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.156257 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:14:02.164103 kernel: audit: type=1130 audit(1765854842.159:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.160442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:02.168978 kernel: audit: type=1130 audit(1765854842.163:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.169952 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:14:02.171959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:14:02.174283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:14:02.178997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:14:02.204183 systemd-tmpfiles[318]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:14:02.208760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:14:02.215680 kernel: audit: type=1130 audit(1765854842.208:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.214735 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 16 03:14:02.220834 kernel: audit: type=1130 audit(1765854842.215:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.221267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:14:02.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.226818 kernel: audit: type=1130 audit(1765854842.221:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.226000 audit: BPF prog-id=6 op=LOAD Dec 16 03:14:02.228835 kernel: audit: type=1334 audit(1765854842.226:9): prog-id=6 op=LOAD Dec 16 03:14:02.228959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:14:02.229851 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:14:02.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.236817 kernel: audit: type=1130 audit(1765854842.230:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.238769 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:14:02.272673 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:14:02.295001 systemd-resolved[335]: Positive Trust Anchors: Dec 16 03:14:02.295894 systemd-resolved[335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:14:02.295903 systemd-resolved[335]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:14:02.295961 systemd-resolved[335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:14:02.339138 systemd-resolved[335]: Defaulting to hostname 'linux'. Dec 16 03:14:02.341897 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 03:14:02.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.343515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:14:02.399816 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:14:02.416834 kernel: iscsi: registered transport (tcp) Dec 16 03:14:02.442838 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:14:02.442940 kernel: QLogic iSCSI HBA Driver Dec 16 03:14:02.477426 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:14:02.503087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:14:02.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.504106 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:14:02.572395 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:14:02.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.579192 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:14:02.580493 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:14:02.637619 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:14:02.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.638000 audit: BPF prog-id=7 op=LOAD Dec 16 03:14:02.638000 audit: BPF prog-id=8 op=LOAD Dec 16 03:14:02.641315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:14:02.673282 systemd-udevd[580]: Using default interface naming scheme 'v257'. Dec 16 03:14:02.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.687264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:14:02.693355 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:14:02.729312 dracut-pre-trigger[631]: rd.md=0: removing MD RAID activation Dec 16 03:14:02.759011 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:14:02.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.760000 audit: BPF prog-id=9 op=LOAD Dec 16 03:14:02.763095 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:14:02.776226 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 03:14:02.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.779558 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:14:02.834049 systemd-networkd[706]: lo: Link UP Dec 16 03:14:02.834061 systemd-networkd[706]: lo: Gained carrier Dec 16 03:14:02.836504 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:14:02.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.837146 systemd[1]: Reached target network.target - Network. Dec 16 03:14:02.887345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:14:02.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.892312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:14:03.014123 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 03:14:03.042455 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 03:14:03.083820 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:14:03.091505 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:14:03.101289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 03:14:03.103994 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:14:03.142025 kernel: AES CTR mode by8 optimization enabled Dec 16 03:14:03.135170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:14:03.135406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:03.136931 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:14:03.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:03.138463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:14:03.146269 systemd-networkd[706]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:14:03.146274 systemd-networkd[706]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:14:03.147060 systemd-networkd[706]: eth1: Link UP Dec 16 03:14:03.147887 systemd-networkd[706]: eth1: Gained carrier Dec 16 03:14:03.164995 disk-uuid[772]: Primary Header is updated. Dec 16 03:14:03.164995 disk-uuid[772]: Secondary Entries is updated. Dec 16 03:14:03.164995 disk-uuid[772]: Secondary Header is updated. 
Dec 16 03:14:03.147910 systemd-networkd[706]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:14:03.156831 systemd-networkd[706]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Dec 16 03:14:03.159125 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 16 03:14:03.192207 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 03:14:03.162272 systemd-networkd[706]: eth0: Link UP Dec 16 03:14:03.162562 systemd-networkd[706]: eth0: Gained carrier Dec 16 03:14:03.162585 systemd-networkd[706]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Dec 16 03:14:03.181942 systemd-networkd[706]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253 Dec 16 03:14:03.198935 systemd-networkd[706]: eth0: DHCPv4 address 146.190.151.166/20, gateway 146.190.144.1 acquired from 169.254.169.253 Dec 16 03:14:03.298915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:03.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:03.378250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:14:03.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:03.379442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:14:03.379978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:14:03.380448 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:14:03.383590 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:14:03.418183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:14:03.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.269578 disk-uuid[779]: Warning: The kernel is still using the old partition table. Dec 16 03:14:04.269578 disk-uuid[779]: The new table will be used at the next reboot or after you Dec 16 03:14:04.269578 disk-uuid[779]: run partprobe(8) or kpartx(8) Dec 16 03:14:04.269578 disk-uuid[779]: The operation has completed successfully. Dec 16 03:14:04.279509 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:14:04.279689 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:14:04.290676 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 03:14:04.290716 kernel: audit: type=1130 audit(1765854844.279:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.290741 kernel: audit: type=1131 audit(1765854844.279:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 03:14:04.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.283191 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:14:04.319963 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834) Dec 16 03:14:04.320036 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:14:04.323810 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:14:04.327490 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:14:04.327569 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:14:04.335836 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:14:04.336506 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:14:04.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.342759 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:14:04.343456 kernel: audit: type=1130 audit(1765854844.337:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.535827 systemd-networkd[706]: eth1: Gained IPv6LL Dec 16 03:14:04.574810 ignition[853]: Ignition 2.24.0 Dec 16 03:14:04.574827 ignition[853]: Stage: fetch-offline Dec 16 03:14:04.574889 ignition[853]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:04.574901 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:04.576867 ignition[853]: parsed url from cmdline: "" Dec 16 03:14:04.577896 ignition[853]: no config URL provided Dec 16 03:14:04.578252 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:14:04.578286 ignition[853]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:14:04.578295 ignition[853]: failed to fetch config: resource requires networking Dec 16 03:14:04.581000 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:14:04.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.578960 ignition[853]: Ignition finished successfully Dec 16 03:14:04.593287 kernel: audit: type=1130 audit(1765854844.581:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.584958 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 03:14:04.630888 ignition[861]: Ignition 2.24.0 Dec 16 03:14:04.630904 ignition[861]: Stage: fetch Dec 16 03:14:04.631195 ignition[861]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:04.631211 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:04.631397 ignition[861]: parsed url from cmdline: "" Dec 16 03:14:04.631404 ignition[861]: no config URL provided Dec 16 03:14:04.631423 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:14:04.631436 ignition[861]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:14:04.631494 ignition[861]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 16 03:14:04.659554 ignition[861]: GET result: OK Dec 16 03:14:04.659840 ignition[861]: parsing config with SHA512: 5020d649bbc47fb6ebfb3779ee823f79ccf21552f8c512d9772bcae8215d66a74bc256f96649efe7c3d1f9025b008360565de865f50bb4e34dad141728b46067 Dec 16 03:14:04.669047 unknown[861]: fetched base config from "system" Dec 16 03:14:04.669527 ignition[861]: fetch: fetch complete Dec 16 03:14:04.669061 unknown[861]: fetched base config from "system" Dec 16 03:14:04.669534 ignition[861]: fetch: fetch passed Dec 16 03:14:04.669070 unknown[861]: fetched user config from "digitalocean" Dec 16 03:14:04.669586 ignition[861]: Ignition finished successfully Dec 16 03:14:04.674540 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 03:14:04.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.676138 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:14:04.680312 kernel: audit: type=1130 audit(1765854844.674:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.711508 ignition[868]: Ignition 2.24.0 Dec 16 03:14:04.711521 ignition[868]: Stage: kargs Dec 16 03:14:04.711762 ignition[868]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:04.711792 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:04.712964 ignition[868]: kargs: kargs passed Dec 16 03:14:04.714502 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 03:14:04.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.713016 ignition[868]: Ignition finished successfully Dec 16 03:14:04.720389 kernel: audit: type=1130 audit(1765854844.714:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.718986 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 03:14:04.752304 ignition[875]: Ignition 2.24.0 Dec 16 03:14:04.752329 ignition[875]: Stage: disks Dec 16 03:14:04.752551 ignition[875]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:04.752561 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:04.754320 ignition[875]: disks: disks passed Dec 16 03:14:04.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 03:14:04.756421 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:14:04.762918 kernel: audit: type=1130 audit(1765854844.756:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.754427 ignition[875]: Ignition finished successfully Dec 16 03:14:04.760567 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:14:04.761621 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:14:04.762106 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:14:04.762561 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:14:04.763894 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:14:04.765818 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:14:04.807555 systemd-fsck[883]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 16 03:14:04.810331 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 03:14:04.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.813918 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:14:04.816678 kernel: audit: type=1130 audit(1765854844.810:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:04.853934 systemd-networkd[706]: eth0: Gained IPv6LL Dec 16 03:14:04.948837 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:14:04.949113 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:14:04.950274 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:14:04.952618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:14:04.954628 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:14:04.961218 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 16 03:14:04.967634 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 03:14:04.970881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Dec 16 03:14:04.970913 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:14:04.970927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:14:04.975803 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:14:04.975852 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:14:04.980719 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:14:04.981679 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:14:04.984459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:14:04.985058 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 16 03:14:04.990545 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 03:14:05.074326 coreos-metadata[894]: Dec 16 03:14:05.073 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:14:05.085538 coreos-metadata[894]: Dec 16 03:14:05.085 INFO Fetch successful Dec 16 03:14:05.088239 coreos-metadata[893]: Dec 16 03:14:05.088 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:14:05.093529 coreos-metadata[894]: Dec 16 03:14:05.093 INFO wrote hostname ci-4547.0.0-7-1189c174c4 to /sysroot/etc/hostname Dec 16 03:14:05.095452 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:14:05.103213 kernel: audit: type=1130 audit(1765854845.095:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.103654 coreos-metadata[893]: Dec 16 03:14:05.100 INFO Fetch successful Dec 16 03:14:05.110929 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Dec 16 03:14:05.111704 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Dec 16 03:14:05.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.116827 kernel: audit: type=1130 audit(1765854845.112:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.224982 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:14:05.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.227446 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:14:05.229049 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:14:05.252010 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:14:05.254153 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:14:05.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.274588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
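Editor's note: the coreos-metadata and flatcar-metadata-hostname entries just above fetch droplet metadata from the link-local endpoint http://169.254.169.254/metadata/v1.json (and Ignition earlier fetched user-data from http://169.254.169.254/metadata/v1/user-data), then write the hostname into /sysroot/etc/hostname. The following is a minimal Python sketch of that fetch for illustration only; the two URLs come from the log, while the "hostname" field name in the JSON document is an assumption not confirmed by the log.

# Illustrative sketch only: fetch the droplet metadata the way the log entries above do.
# URLs are taken from the log; the "hostname" JSON field is an assumed name.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"        # seen in the log
USER_DATA_URL = "http://169.254.169.254/metadata/v1/user-data"  # seen in the log

def fetch(url: str) -> bytes:
    # Link-local metadata service; a short timeout keeps an early-boot fetch from hanging.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    meta = json.loads(fetch(METADATA_URL))
    print("hostname:", meta.get("hostname"))            # assumed field name
    print("user-data bytes:", len(fetch(USER_DATA_URL)))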
Dec 16 03:14:05.287468 ignition[997]: INFO : Ignition 2.24.0 Dec 16 03:14:05.287468 ignition[997]: INFO : Stage: mount Dec 16 03:14:05.288875 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:05.288875 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:05.291061 ignition[997]: INFO : mount: mount passed Dec 16 03:14:05.291061 ignition[997]: INFO : Ignition finished successfully Dec 16 03:14:05.292091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 03:14:05.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:05.294505 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:14:05.315535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:14:05.343831 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Dec 16 03:14:05.345819 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:14:05.347804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:14:05.352430 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:14:05.352516 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:14:05.355020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:14:05.384603 ignition[1025]: INFO : Ignition 2.24.0 Dec 16 03:14:05.384603 ignition[1025]: INFO : Stage: files Dec 16 03:14:05.390585 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:05.390585 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:05.390585 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:14:05.390585 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:14:05.390585 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:14:05.394018 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:14:05.394018 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:14:05.395633 unknown[1025]: wrote ssh authorized keys file for user: core Dec 16 03:14:05.396383 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:14:05.397304 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:14:05.398276 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 03:14:05.542682 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:14:05.581690 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:14:05.581690 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 03:14:05.584375 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 03:14:05.584375 ignition[1025]: 
INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:14:05.584375 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:14:05.584375 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:14:05.584375 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:14:05.584375 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:14:05.584375 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:14:05.592290 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:14:05.592290 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:14:05.592290 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:14:05.592290 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:14:05.592290 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:14:05.592290 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 03:14:06.068855 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:14:09.021533 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:14:09.022843 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:14:09.024048 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:14:09.025809 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:14:09.025809 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:14:09.025809 ignition[1025]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:14:09.025809 ignition[1025]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 03:14:09.028881 ignition[1025]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:14:09.028881 ignition[1025]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:14:09.028881 ignition[1025]: INFO : files: files passed Dec 16 03:14:09.028881 ignition[1025]: INFO : Ignition finished 
successfully Dec 16 03:14:09.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.030603 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:14:09.035003 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 03:14:09.037993 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:14:09.053754 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:14:09.053935 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:14:09.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.065133 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:14:09.065133 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:14:09.067519 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:14:09.068636 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:14:09.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.069826 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 03:14:09.071885 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:14:09.120581 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:14:09.120721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 03:14:09.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.122267 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 03:14:09.122889 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 03:14:09.123960 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 03:14:09.125166 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 03:14:09.153378 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:14:09.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:09.155827 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 03:14:09.182691 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:14:09.184142 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:14:09.184697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:14:09.185346 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 03:14:09.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.186532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 03:14:09.186683 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:14:09.188009 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 03:14:09.188995 systemd[1]: Stopped target basic.target - Basic System. Dec 16 03:14:09.190041 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 03:14:09.190939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:14:09.191694 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 03:14:09.192713 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:14:09.193652 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 03:14:09.194605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:14:09.195510 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 03:14:09.196523 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 03:14:09.197442 systemd[1]: Stopped target swap.target - Swaps. Dec 16 03:14:09.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.198341 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 03:14:09.198521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:14:09.199595 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:14:09.200371 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:14:09.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.201341 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 03:14:09.201610 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:14:09.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.206464 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 03:14:09.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:09.206644 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 03:14:09.207943 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 03:14:09.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.208189 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:14:09.208978 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 03:14:09.209158 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 03:14:09.210268 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 03:14:09.210447 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:14:09.212922 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 03:14:09.216926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 03:14:09.218010 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 03:14:09.218713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:14:09.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.220085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 03:14:09.220918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:14:09.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.223435 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 03:14:09.224055 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:14:09.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.229214 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 03:14:09.229953 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 03:14:09.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.255342 ignition[1081]: INFO : Ignition 2.24.0 Dec 16 03:14:09.257042 ignition[1081]: INFO : Stage: umount Dec 16 03:14:09.257042 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:14:09.257042 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 16 03:14:09.260360 ignition[1081]: INFO : umount: umount passed Dec 16 03:14:09.260360 ignition[1081]: INFO : Ignition finished successfully Dec 16 03:14:09.258595 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
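Editor's note: the Ignition "files" stage recorded earlier in this log (ssh keys for the "core" user, files under /home/core and /etc/flatcar, the prepare-helm.service unit written and preset to enabled) is driven by the JSON config fetched from user-data. Below is a minimal Python sketch that assembles a config of that general shape, for illustration only: field names follow the Ignition v3 schema as commonly documented rather than anything shown in the log, and the key, file contents, and unit body are placeholders.

# Illustrative sketch only: build an Ignition-style v3 config resembling what the
# "files" stage above applied. Field names are assumed from the public v3 schema;
# all concrete values here are placeholders, not taken from the log.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "mode": 420,  # decimal for 0644
                "contents": {"source": "data:,GROUP%3Dstable%0A"},
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=placeholder\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))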
Dec 16 03:14:09.262180 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 03:14:09.262312 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 03:14:09.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.265341 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 03:14:09.265457 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:14:09.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.267045 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 03:14:09.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.267112 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:14:09.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.267715 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 03:14:09.267776 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 03:14:09.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.268624 systemd[1]: Stopped target network.target - Network. Dec 16 03:14:09.269482 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 03:14:09.269642 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:14:09.270550 systemd[1]: Stopped target paths.target - Path Units. Dec 16 03:14:09.271407 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 03:14:09.274868 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:14:09.275950 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 03:14:09.276395 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 03:14:09.277327 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 03:14:09.277378 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:14:09.278259 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 03:14:09.278323 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:14:09.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.278982 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 03:14:09.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.279011 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:14:09.279764 systemd[1]: ignition-setup.service: Deactivated successfully. 
Dec 16 03:14:09.279859 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:14:09.295852 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 16 03:14:09.295900 kernel: audit: type=1131 audit(1765854849.284:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.295926 kernel: audit: type=1131 audit(1765854849.291:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.280528 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:14:09.280574 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:14:09.281401 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 03:14:09.282370 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:14:09.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.285060 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:14:09.304140 kernel: audit: type=1131 audit(1765854849.295:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.285170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:14:09.310769 kernel: audit: type=1334 audit(1765854849.303:69): prog-id=6 op=UNLOAD Dec 16 03:14:09.310823 kernel: audit: type=1131 audit(1765854849.305:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.303000 audit: BPF prog-id=6 op=UNLOAD Dec 16 03:14:09.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.286351 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:14:09.313485 kernel: audit: type=1334 audit(1765854849.310:71): prog-id=9 op=UNLOAD Dec 16 03:14:09.310000 audit: BPF prog-id=9 op=UNLOAD Dec 16 03:14:09.286499 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:14:09.295973 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:14:09.296110 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:14:09.304004 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:14:09.304130 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 03:14:09.334288 kernel: audit: type=1131 audit(1765854849.318:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.334319 kernel: audit: type=1131 audit(1765854849.318:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.334333 kernel: audit: type=1131 audit(1765854849.319:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.309019 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:14:09.313108 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 03:14:09.313182 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:14:09.315196 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:14:09.317093 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:14:09.317177 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:14:09.319203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:14:09.319268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:14:09.319760 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:14:09.319830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 03:14:09.320340 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:14:09.337339 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:14:09.338849 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:14:09.351819 kernel: audit: type=1131 audit(1765854849.343:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.344234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 03:14:09.344282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:14:09.352290 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 16 03:14:09.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.352337 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:14:09.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.353141 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:14:09.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.353202 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:14:09.354485 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:14:09.354546 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 03:14:09.355427 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:14:09.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.355493 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:14:09.357383 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:14:09.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.359475 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:14:09.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.359561 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:14:09.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.360942 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:14:09.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.361015 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:14:09.363195 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:14:09.363269 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:14:09.364224 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:14:09.364291 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:14:09.366316 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 03:14:09.366376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:09.376819 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:14:09.376942 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:14:09.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.383409 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:14:09.383530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:14:09.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:09.384928 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:14:09.386762 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:14:09.410541 systemd[1]: Switching root. Dec 16 03:14:09.463671 systemd-journald[299]: Journal stopped Dec 16 03:14:10.647267 systemd-journald[299]: Received SIGTERM from PID 1 (systemd). Dec 16 03:14:10.647335 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:14:10.647361 kernel: SELinux: policy capability open_perms=1 Dec 16 03:14:10.647381 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:14:10.647420 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:14:10.647439 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:14:10.647457 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:14:10.647480 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:14:10.647503 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:14:10.647516 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:14:10.647563 systemd[1]: Successfully loaded SELinux policy in 62.983ms. Dec 16 03:14:10.647590 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.365ms. Dec 16 03:14:10.647606 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:14:10.647620 systemd[1]: Detected virtualization kvm. Dec 16 03:14:10.647633 systemd[1]: Detected architecture x86-64. Dec 16 03:14:10.647650 systemd[1]: Detected first boot. Dec 16 03:14:10.647664 systemd[1]: Hostname set to . Dec 16 03:14:10.647679 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:14:10.647693 zram_generator::config[1127]: No configuration found. 
Dec 16 03:14:10.647713 kernel: Guest personality initialized and is inactive Dec 16 03:14:10.647734 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:14:10.647752 kernel: Initialized host personality Dec 16 03:14:10.647776 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:14:10.648695 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:14:10.648730 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:14:10.648745 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:14:10.648760 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:14:10.648780 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:14:10.648806 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:14:10.648829 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:14:10.648847 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:14:10.648861 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:14:10.648875 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:14:10.648889 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:14:10.648903 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:14:10.648917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:14:10.648933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:14:10.648946 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:14:10.648961 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:14:10.648974 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:14:10.648988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:14:10.649004 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:14:10.649018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:14:10.649033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:14:10.649046 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:14:10.649059 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:14:10.649072 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:14:10.649086 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:14:10.649103 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:14:10.649117 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:14:10.649133 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:14:10.649147 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:14:10.649161 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:14:10.649174 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Dec 16 03:14:10.649188 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:14:10.649205 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:14:10.649219 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:14:10.649232 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:14:10.649246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:14:10.649260 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:14:10.649276 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:14:10.649290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:14:10.649307 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:14:10.649321 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:14:10.649334 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:14:10.649348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:14:10.649361 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:14:10.649375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:10.649389 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:14:10.649406 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:14:10.649419 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:14:10.649433 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:14:10.649447 systemd[1]: Reached target machines.target - Containers. Dec 16 03:14:10.649460 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:14:10.649473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:14:10.649490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:14:10.649523 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:14:10.649543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:14:10.649562 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:14:10.649577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:14:10.649591 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:14:10.649605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:14:10.649622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:14:10.649636 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:14:10.649650 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:14:10.649665 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:14:10.649697 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 16 03:14:10.649720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:14:10.649740 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:14:10.649759 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:14:10.649800 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:14:10.649817 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:14:10.649830 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:14:10.649845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:14:10.649862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:10.649879 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:14:10.649893 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:14:10.649907 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:14:10.649922 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:14:10.649936 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:14:10.649951 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:14:10.649964 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:14:10.649982 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:14:10.649996 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:14:10.650009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:14:10.650023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:14:10.650037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:14:10.650056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:14:10.650078 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:14:10.650104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:14:10.650125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:14:10.650146 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:14:10.650168 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:14:10.650235 systemd-journald[1199]: Collecting audit messages is enabled. Dec 16 03:14:10.650267 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:14:10.650285 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:14:10.650300 systemd-journald[1199]: Journal started Dec 16 03:14:10.650328 systemd-journald[1199]: Runtime Journal (/run/log/journal/07647a9e05464e36ac0ba80722a78d5b) is 4.8M, max 39.1M, 34.2M free. 
Dec 16 03:14:10.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.517000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:14:10.517000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:14:10.518000 audit: BPF prog-id=15 op=LOAD Dec 16 03:14:10.518000 audit: BPF prog-id=16 op=LOAD Dec 16 03:14:10.524000 audit: BPF prog-id=17 op=LOAD Dec 16 03:14:10.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:10.642000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:14:10.642000 audit[1199]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd5d40cd00 a2=4000 a3=0 items=0 ppid=1 pid=1199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:10.642000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:14:10.302352 systemd[1]: Queued start job for default target multi-user.target. Dec 16 03:14:10.315569 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 03:14:10.316136 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:14:10.655821 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:14:10.662813 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:14:10.666801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:14:10.669886 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:14:10.676514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:14:10.676602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:14:10.685827 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:14:10.690208 kernel: ACPI: bus type drm_connector registered Dec 16 03:14:10.690306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:14:10.693846 kernel: fuse: init (API version 7.41) Dec 16 03:14:10.695813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:14:10.703812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:14:10.711809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:14:10.717141 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:14:10.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.719558 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:14:10.721273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:14:10.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.722545 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:14:10.723862 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Dec 16 03:14:10.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.724988 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:14:10.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.726716 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:14:10.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.729970 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:14:10.731066 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 03:14:10.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.743856 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:14:10.757721 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:14:10.762690 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:14:10.768442 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:14:10.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.772171 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:14:10.783073 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:14:10.790039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:14:10.790401 systemd-journald[1199]: Time spent on flushing to /var/log/journal/07647a9e05464e36ac0ba80722a78d5b is 51.817ms for 1148 entries. Dec 16 03:14:10.790401 systemd-journald[1199]: System Journal (/var/log/journal/07647a9e05464e36ac0ba80722a78d5b) is 8M, max 163.5M, 155.5M free. Dec 16 03:14:10.846781 systemd-journald[1199]: Received client request to flush runtime journal. Dec 16 03:14:10.846958 kernel: loop2: detected capacity change from 0 to 8 Dec 16 03:14:10.847118 kernel: loop3: detected capacity change from 0 to 229808 Dec 16 03:14:10.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:10.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.839989 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 16 03:14:10.840005 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 16 03:14:10.848055 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:14:10.852005 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:14:10.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.856064 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:14:10.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.861268 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:14:10.890849 kernel: loop4: detected capacity change from 0 to 50784 Dec 16 03:14:10.891967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:14:10.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.920224 kernel: loop5: detected capacity change from 0 to 111560 Dec 16 03:14:10.923314 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:14:10.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:10.925000 audit: BPF prog-id=18 op=LOAD Dec 16 03:14:10.925000 audit: BPF prog-id=19 op=LOAD Dec 16 03:14:10.925000 audit: BPF prog-id=20 op=LOAD Dec 16 03:14:10.927176 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:14:10.929000 audit: BPF prog-id=21 op=LOAD Dec 16 03:14:10.931027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:14:10.934890 kernel: loop6: detected capacity change from 0 to 8 Dec 16 03:14:10.935049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:14:10.940832 kernel: loop7: detected capacity change from 0 to 229808 Dec 16 03:14:10.945000 audit: BPF prog-id=22 op=LOAD Dec 16 03:14:10.945000 audit: BPF prog-id=23 op=LOAD Dec 16 03:14:10.945000 audit: BPF prog-id=24 op=LOAD Dec 16 03:14:10.949028 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:14:10.951000 audit: BPF prog-id=25 op=LOAD Dec 16 03:14:10.953000 audit: BPF prog-id=26 op=LOAD Dec 16 03:14:10.953000 audit: BPF prog-id=27 op=LOAD Dec 16 03:14:10.954663 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 16 03:14:10.959826 kernel: loop1: detected capacity change from 0 to 50784 Dec 16 03:14:10.972572 (sd-merge)[1270]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Dec 16 03:14:10.985620 (sd-merge)[1270]: Merged extensions into '/usr'. Dec 16 03:14:10.998461 systemd[1]: Reload requested from client PID 1230 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:14:10.998484 systemd[1]: Reloading... Dec 16 03:14:11.029962 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Dec 16 03:14:11.030296 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Dec 16 03:14:11.103818 zram_generator::config[1302]: No configuration found. Dec 16 03:14:11.130303 systemd-nsresourced[1275]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:14:11.328651 systemd-oomd[1272]: No swap; memory pressure usage will be degraded Dec 16 03:14:11.375715 systemd-resolved[1273]: Positive Trust Anchors: Dec 16 03:14:11.375732 systemd-resolved[1273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:14:11.375737 systemd-resolved[1273]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:14:11.375774 systemd-resolved[1273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:14:11.403885 systemd-resolved[1273]: Using system hostname 'ci-4547.0.0-7-1189c174c4'. Dec 16 03:14:11.441302 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:14:11.441489 systemd[1]: Reloading finished in 442 ms. Dec 16 03:14:11.460006 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:14:11.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.460715 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:14:11.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.461537 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:14:11.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.462314 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:14:11.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:11.463139 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:14:11.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.464329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:14:11.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.471695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:14:11.474421 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:14:11.482231 systemd[1]: Starting ensure-sysext.service... Dec 16 03:14:11.486071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:14:11.490000 audit: BPF prog-id=28 op=LOAD Dec 16 03:14:11.490000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:14:11.491000 audit: BPF prog-id=29 op=LOAD Dec 16 03:14:11.491000 audit: BPF prog-id=30 op=LOAD Dec 16 03:14:11.491000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:14:11.491000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:14:11.491000 audit: BPF prog-id=31 op=LOAD Dec 16 03:14:11.492000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:14:11.492000 audit: BPF prog-id=32 op=LOAD Dec 16 03:14:11.492000 audit: BPF prog-id=33 op=LOAD Dec 16 03:14:11.492000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:14:11.492000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:14:11.492000 audit: BPF prog-id=34 op=LOAD Dec 16 03:14:11.492000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:14:11.494000 audit: BPF prog-id=35 op=LOAD Dec 16 03:14:11.494000 audit: BPF prog-id=36 op=LOAD Dec 16 03:14:11.494000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:14:11.494000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:14:11.496000 audit: BPF prog-id=37 op=LOAD Dec 16 03:14:11.496000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:14:11.498000 audit: BPF prog-id=38 op=LOAD Dec 16 03:14:11.498000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:14:11.498000 audit: BPF prog-id=39 op=LOAD Dec 16 03:14:11.498000 audit: BPF prog-id=40 op=LOAD Dec 16 03:14:11.498000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:14:11.498000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:14:11.502969 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:14:11.534313 systemd[1]: Reload requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:14:11.534340 systemd[1]: Reloading... Dec 16 03:14:11.551148 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:14:11.551194 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:14:11.551505 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:14:11.554928 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Dec 16 03:14:11.554994 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Dec 16 03:14:11.567934 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 16 03:14:11.567946 systemd-tmpfiles[1365]: Skipping /boot Dec 16 03:14:11.592126 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:14:11.592140 systemd-tmpfiles[1365]: Skipping /boot Dec 16 03:14:11.667809 zram_generator::config[1401]: No configuration found. Dec 16 03:14:11.895952 systemd[1]: Reloading finished in 361 ms. Dec 16 03:14:11.906907 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:14:11.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.908000 audit: BPF prog-id=41 op=LOAD Dec 16 03:14:11.908000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:14:11.908000 audit: BPF prog-id=42 op=LOAD Dec 16 03:14:11.908000 audit: BPF prog-id=43 op=LOAD Dec 16 03:14:11.908000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:14:11.908000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:14:11.909000 audit: BPF prog-id=44 op=LOAD Dec 16 03:14:11.909000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:14:11.909000 audit: BPF prog-id=45 op=LOAD Dec 16 03:14:11.909000 audit: BPF prog-id=46 op=LOAD Dec 16 03:14:11.910000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:14:11.910000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:14:11.910000 audit: BPF prog-id=47 op=LOAD Dec 16 03:14:11.910000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:14:11.910000 audit: BPF prog-id=48 op=LOAD Dec 16 03:14:11.910000 audit: BPF prog-id=49 op=LOAD Dec 16 03:14:11.910000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:14:11.910000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:14:11.911000 audit: BPF prog-id=50 op=LOAD Dec 16 03:14:11.911000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:14:11.911000 audit: BPF prog-id=51 op=LOAD Dec 16 03:14:11.912000 audit: BPF prog-id=52 op=LOAD Dec 16 03:14:11.912000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:14:11.912000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:14:11.912000 audit: BPF prog-id=53 op=LOAD Dec 16 03:14:11.912000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:14:11.926616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:14:11.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:11.937736 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:14:11.941072 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:14:11.943326 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:14:11.950482 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:14:11.952000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:14:11.952000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:14:11.952000 audit: BPF prog-id=54 op=LOAD Dec 16 03:14:11.953000 audit: BPF prog-id=55 op=LOAD Dec 16 03:14:11.954625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:14:11.960291 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:14:11.964668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 03:14:11.965948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:14:11.968150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:14:11.975220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:14:11.979253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:14:11.980007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:14:11.980228 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:14:11.980343 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:14:11.980440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:11.989364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:11.989597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:14:11.990920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:14:11.991151 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:14:11.991250 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:14:11.991351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:11.997816 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:11.998071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:14:12.000648 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:14:12.001318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:14:12.001561 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:14:12.001663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:14:12.001849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 03:14:12.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.010915 systemd[1]: Finished ensure-sysext.service. Dec 16 03:14:12.016000 audit: BPF prog-id=56 op=LOAD Dec 16 03:14:12.026000 audit[1448]: SYSTEM_BOOT pid=1448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.030562 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 03:14:12.033165 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:14:12.033433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:14:12.035270 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:14:12.035919 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:14:12.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.053213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:14:12.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.087618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:14:12.089773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:14:12.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.092503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:14:12.092744 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:14:12.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:12.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.095168 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:14:12.095228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:14:12.125186 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 03:14:12.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.126001 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:14:12.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.154744 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:14:12.162347 systemd-udevd[1447]: Using default interface naming scheme 'v257'. Dec 16 03:14:12.167122 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:14:12.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.171152 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:14:12.175000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:14:12.175000 audit[1484]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddf859320 a2=420 a3=0 items=0 ppid=1443 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:12.175000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:14:12.176452 augenrules[1484]: No rules Dec 16 03:14:12.177138 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:14:12.177471 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:14:12.202206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:14:12.215073 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:14:12.381897 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:14:12.399922 systemd-networkd[1498]: lo: Link UP Dec 16 03:14:12.399934 systemd-networkd[1498]: lo: Gained carrier Dec 16 03:14:12.407665 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:14:12.408994 systemd[1]: Reached target network.target - Network. 
Dec 16 03:14:12.412952 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:14:12.416407 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:14:12.474459 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:14:12.477938 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:14:12.490741 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Dec 16 03:14:12.494779 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 16 03:14:12.496863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:12.497025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:14:12.499183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:14:12.508346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:14:12.518322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:14:12.520002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:14:12.520136 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:14:12.520167 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:14:12.520201 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:14:12.520216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:14:12.524193 systemd-networkd[1498]: eth0: Configuring with /run/systemd/network/10-02:60:91:25:84:f9.network. Dec 16 03:14:12.525291 systemd-networkd[1498]: eth0: Link UP Dec 16 03:14:12.525440 systemd-networkd[1498]: eth0: Gained carrier Dec 16 03:14:12.539447 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:12.567905 kernel: ISO 9660 Extensions: RRIP_1991A Dec 16 03:14:12.568261 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 16 03:14:12.581958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:14:12.582870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:14:12.588245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:14:12.589879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:14:12.591368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:14:12.594718 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 16 03:14:12.596132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:14:12.598279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:14:12.611819 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 03:14:12.632967 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:14:12.636500 systemd-networkd[1498]: eth1: Configuring with /run/systemd/network/10-ae:f5:53:07:db:54.network. Dec 16 03:14:12.641545 systemd-networkd[1498]: eth1: Link UP Dec 16 03:14:12.642283 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:12.642734 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:12.644074 systemd-networkd[1498]: eth1: Gained carrier Dec 16 03:14:12.649920 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:12.650657 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:12.682807 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 16 03:14:12.699021 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:14:12.703408 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:14:12.706432 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:14:12.718024 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:14:12.722059 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:14:12.730806 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 03:14:12.758791 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:14:12.759418 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:14:12.759981 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:14:12.760489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:14:12.761588 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:14:12.762537 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:14:12.763377 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:14:12.765020 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:14:12.765809 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:14:12.766232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:14:12.766676 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:14:12.766712 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:14:12.767141 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:14:12.768355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:14:12.771315 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Dec 16 03:14:12.777599 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:14:12.778293 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:14:12.778734 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:14:12.787644 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:14:12.788586 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:14:12.790190 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:14:12.790926 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:14:12.795342 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:14:12.795812 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:14:12.796267 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:14:12.796290 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:14:12.797444 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:14:12.801043 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:14:12.807256 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:14:12.811243 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:14:12.822004 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:14:12.827871 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:14:12.828430 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:14:12.836085 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 03:14:12.841770 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:14:12.850148 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:14:12.854631 jq[1557]: false Dec 16 03:14:12.856551 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:14:12.863107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:14:12.870669 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:14:12.871886 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:14:12.879131 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:14:12.893052 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:14:12.897894 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing passwd entry cache Dec 16 03:14:12.896384 oslogin_cache_refresh[1559]: Refreshing passwd entry cache Dec 16 03:14:12.906813 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting users, quitting Dec 16 03:14:12.906813 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 03:14:12.906813 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing group entry cache Dec 16 03:14:12.906813 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting groups, quitting Dec 16 03:14:12.906813 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:14:12.905452 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:14:12.902620 oslogin_cache_refresh[1559]: Failure getting users, quitting Dec 16 03:14:12.907289 extend-filesystems[1558]: Found /dev/vda6 Dec 16 03:14:12.902641 oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:14:12.902690 oslogin_cache_refresh[1559]: Refreshing group entry cache Dec 16 03:14:12.904438 oslogin_cache_refresh[1559]: Failure getting groups, quitting Dec 16 03:14:12.904449 oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:14:12.909666 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:14:12.910523 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:14:12.912887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:14:12.913303 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:14:12.916096 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:14:12.917964 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:14:12.918259 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 03:14:12.924355 extend-filesystems[1558]: Found /dev/vda9 Dec 16 03:14:12.940934 extend-filesystems[1558]: Checking size of /dev/vda9 Dec 16 03:14:12.964882 jq[1571]: true Dec 16 03:14:12.978888 coreos-metadata[1554]: Dec 16 03:14:12.976 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:14:12.995295 update_engine[1570]: I20251216 03:14:12.992479 1570 main.cc:92] Flatcar Update Engine starting Dec 16 03:14:12.994964 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:14:12.995217 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:14:12.998016 coreos-metadata[1554]: Dec 16 03:14:12.995 INFO Fetch successful Dec 16 03:14:13.007231 extend-filesystems[1558]: Resized partition /dev/vda9 Dec 16 03:14:13.015355 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:14:13.017106 tar[1575]: linux-amd64/LICENSE Dec 16 03:14:13.017106 tar[1575]: linux-amd64/helm Dec 16 03:14:13.022056 dbus-daemon[1555]: [system] SELinux support is enabled Dec 16 03:14:13.023853 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:14:13.029699 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Dec 16 03:14:13.029662 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:14:13.029695 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 16 03:14:13.030326 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:14:13.030414 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 16 03:14:13.030429 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:14:13.042186 jq[1597]: true Dec 16 03:14:13.048078 update_engine[1570]: I20251216 03:14:13.045820 1570 update_check_scheduler.cc:74] Next update check in 8m31s Dec 16 03:14:13.047494 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:14:13.064021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:14:13.112456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:14:13.161360 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 16 03:14:13.167228 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 16 03:14:13.202849 kernel: Console: switching to colour dummy device 80x25 Dec 16 03:14:13.202941 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 03:14:13.202987 kernel: [drm] features: -context_init Dec 16 03:14:13.203003 kernel: [drm] number of scanouts: 1 Dec 16 03:14:13.203018 kernel: [drm] number of cap sets: 0 Dec 16 03:14:13.203032 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 16 03:14:13.213860 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 16 03:14:13.221426 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 03:14:13.221779 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:14:13.246444 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 03:14:13.246444 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 16 03:14:13.246444 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 16 03:14:13.246996 extend-filesystems[1558]: Resized filesystem in /dev/vda9 Dec 16 03:14:13.247981 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:14:13.248278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:14:13.279118 systemd-logind[1566]: New seat seat0. Dec 16 03:14:13.285079 bash[1638]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:14:13.341244 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:14:13.342887 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:14:13.360583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:13.372351 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 16 03:14:13.373810 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 03:14:13.375243 systemd[1]: Starting sshkeys.service... 
Dec 16 03:14:13.417613 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:14:13.432815 containerd[1591]: time="2025-12-16T03:14:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:14:13.437204 containerd[1591]: time="2025-12-16T03:14:13.435143082Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:14:13.481737 containerd[1591]: time="2025-12-16T03:14:13.481679027Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.521µs" Dec 16 03:14:13.481737 containerd[1591]: time="2025-12-16T03:14:13.481725650Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:14:13.481913 containerd[1591]: time="2025-12-16T03:14:13.481770441Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:14:13.481963 containerd[1591]: time="2025-12-16T03:14:13.481944172Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:14:13.482152 containerd[1591]: time="2025-12-16T03:14:13.482131947Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:14:13.482194 containerd[1591]: time="2025-12-16T03:14:13.482165315Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482244 containerd[1591]: time="2025-12-16T03:14:13.482228969Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482267 containerd[1591]: time="2025-12-16T03:14:13.482246341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482500 containerd[1591]: time="2025-12-16T03:14:13.482480470Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482526 containerd[1591]: time="2025-12-16T03:14:13.482500187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482526 containerd[1591]: time="2025-12-16T03:14:13.482512726Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482526 containerd[1591]: time="2025-12-16T03:14:13.482520674Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482681 containerd[1591]: time="2025-12-16T03:14:13.482665513Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482708 containerd[1591]: time="2025-12-16T03:14:13.482680320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:14:13.482767 containerd[1591]: 
time="2025-12-16T03:14:13.482752279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.483048 containerd[1591]: time="2025-12-16T03:14:13.483028141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.483082 containerd[1591]: time="2025-12-16T03:14:13.483064099Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:14:13.483082 containerd[1591]: time="2025-12-16T03:14:13.483074064Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:14:13.483136 containerd[1591]: time="2025-12-16T03:14:13.483118291Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:14:13.483438 containerd[1591]: time="2025-12-16T03:14:13.483418519Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:14:13.483503 containerd[1591]: time="2025-12-16T03:14:13.483490806Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:14:13.486135 containerd[1591]: time="2025-12-16T03:14:13.486096544Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:14:13.486218 containerd[1591]: time="2025-12-16T03:14:13.486160898Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:14:13.486325 containerd[1591]: time="2025-12-16T03:14:13.486307233Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:14:13.486351 containerd[1591]: time="2025-12-16T03:14:13.486325629Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:14:13.486372 containerd[1591]: time="2025-12-16T03:14:13.486346165Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:14:13.486372 containerd[1591]: time="2025-12-16T03:14:13.486361972Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486373594Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486383232Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486394268Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486406468Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486416472Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486428956Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486446222Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:14:13.486531 containerd[1591]: time="2025-12-16T03:14:13.486458246Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486560640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486578327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486591808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486603721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486613807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486623167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486635931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486646405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486660021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:14:13.486683 containerd[1591]: time="2025-12-16T03:14:13.486675674Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:14:13.489229 containerd[1591]: time="2025-12-16T03:14:13.486689958Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:14:13.489229 containerd[1591]: time="2025-12-16T03:14:13.486736186Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:14:13.489229 containerd[1591]: time="2025-12-16T03:14:13.486825944Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:14:13.489229 containerd[1591]: time="2025-12-16T03:14:13.486841711Z" level=info msg="Start snapshots syncer" Dec 16 03:14:13.489229 containerd[1591]: time="2025-12-16T03:14:13.486875734Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:14:13.489347 containerd[1591]: time="2025-12-16T03:14:13.487249654Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:14:13.489347 containerd[1591]: time="2025-12-16T03:14:13.487300370Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487366659Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487494054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487513976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487524651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487534823Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487548188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487558778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487568425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487578471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487588965Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487629972Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487643900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:14:13.489535 containerd[1591]: time="2025-12-16T03:14:13.487653298Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487663724Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487671273Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487748276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487763634Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487779639Z" level=info msg="runtime interface created" Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487819664Z" level=info msg="created NRI interface" Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487830099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487842649Z" level=info msg="Connect containerd service" Dec 16 03:14:13.489813 containerd[1591]: time="2025-12-16T03:14:13.487863485Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:14:13.499507 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 03:14:13.501341 containerd[1591]: time="2025-12-16T03:14:13.501300262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:14:13.514382 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 03:14:13.597400 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 03:14:13.605222 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 03:14:13.605346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:14:13.605667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:13.605806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:14:13.608939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 03:14:13.655413 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:14:13.722052 containerd[1591]: time="2025-12-16T03:14:13.721245643Z" level=info msg="Start subscribing containerd event" Dec 16 03:14:13.722052 containerd[1591]: time="2025-12-16T03:14:13.721420465Z" level=info msg="Start recovering state" Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722649817Z" level=info msg="Start event monitor" Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722699424Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722712864Z" level=info msg="Start streaming server" Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722725546Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722737468Z" level=info msg="runtime interface starting up..." Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722747125Z" level=info msg="starting plugins..." Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.722768445Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.723359021Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.723611829Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:14:13.726968 containerd[1591]: time="2025-12-16T03:14:13.723879664Z" level=info msg="containerd successfully booted in 0.291766s" Dec 16 03:14:13.729972 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:14:13.749947 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:14:13.768942 coreos-metadata[1656]: Dec 16 03:14:13.768 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 16 03:14:13.794963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:14:13.801118 coreos-metadata[1656]: Dec 16 03:14:13.799 INFO Fetch successful Dec 16 03:14:13.815211 systemd-networkd[1498]: eth0: Gained IPv6LL Dec 16 03:14:13.816420 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:13.822501 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:14:13.824263 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:14:13.825617 unknown[1656]: wrote ssh authorized keys file for user: core Dec 16 03:14:13.828920 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:14:13.834689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:13.840974 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:14:13.870320 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:14:13.877198 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:14:13.880091 update-ssh-keys[1684]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:14:13.880332 systemd[1]: Started sshd@0-146.190.151.166:22-147.75.109.163:41716.service - OpenSSH per-connection server daemon (147.75.109.163:41716). Dec 16 03:14:13.885081 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Dec 16 03:14:13.891086 systemd[1]: Finished sshkeys.service. Dec 16 03:14:13.906294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 03:14:13.931457 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:14:13.931817 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:14:13.939137 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:14:13.998847 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:14:14.006777 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:14:14.013690 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:14:14.017009 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:14:14.053512 kernel: EDAC MC: Ver: 3.0.0 Dec 16 03:14:14.091818 sshd[1691]: Accepted publickey for core from 147.75.109.163 port 41716 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:14.100134 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:14.111711 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:14:14.114755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:14:14.128632 systemd-logind[1566]: New session 1 of user core. Dec 16 03:14:14.160642 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:14:14.169929 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:14:14.202641 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:14.210116 systemd-logind[1566]: New session 2 of user core. Dec 16 03:14:14.370190 tar[1575]: linux-amd64/README.md Dec 16 03:14:14.399606 systemd[1716]: Queued start job for default target default.target. Dec 16 03:14:14.400224 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:14:14.404248 systemd[1716]: Created slice app.slice - User Application Slice. Dec 16 03:14:14.404712 systemd[1716]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:14:14.404897 systemd[1716]: Reached target paths.target - Paths. Dec 16 03:14:14.405101 systemd[1716]: Reached target timers.target - Timers. Dec 16 03:14:14.407677 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:14:14.408874 systemd[1716]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:14:14.439153 systemd[1716]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:14:14.442046 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:14:14.442175 systemd[1716]: Reached target sockets.target - Sockets. Dec 16 03:14:14.442231 systemd[1716]: Reached target basic.target - Basic System. Dec 16 03:14:14.442271 systemd[1716]: Reached target default.target - Main User Target. Dec 16 03:14:14.442303 systemd[1716]: Startup finished in 217ms. Dec 16 03:14:14.442665 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:14:14.451092 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:14:14.454041 systemd-networkd[1498]: eth1: Gained IPv6LL Dec 16 03:14:14.454486 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. 
Dec 16 03:14:14.491223 systemd[1]: Started sshd@1-146.190.151.166:22-147.75.109.163:41732.service - OpenSSH per-connection server daemon (147.75.109.163:41732). Dec 16 03:14:14.593734 sshd[1733]: Accepted publickey for core from 147.75.109.163 port 41732 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:14.595674 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:14.605720 systemd-logind[1566]: New session 3 of user core. Dec 16 03:14:14.610141 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:14:14.642559 sshd[1737]: Connection closed by 147.75.109.163 port 41732 Dec 16 03:14:14.642333 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:14.658654 systemd[1]: sshd@1-146.190.151.166:22-147.75.109.163:41732.service: Deactivated successfully. Dec 16 03:14:14.662548 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 03:14:14.664129 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:14:14.669960 systemd[1]: Started sshd@2-146.190.151.166:22-147.75.109.163:41742.service - OpenSSH per-connection server daemon (147.75.109.163:41742). Dec 16 03:14:14.673395 systemd-logind[1566]: Removed session 3. Dec 16 03:14:14.766357 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 41742 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:14.769027 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:14.779067 systemd-logind[1566]: New session 4 of user core. Dec 16 03:14:14.784099 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:14:14.814421 sshd[1747]: Connection closed by 147.75.109.163 port 41742 Dec 16 03:14:14.816038 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:14.821115 systemd[1]: sshd@2-146.190.151.166:22-147.75.109.163:41742.service: Deactivated successfully. Dec 16 03:14:14.824100 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:14:14.826274 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Dec 16 03:14:14.827862 systemd-logind[1566]: Removed session 4. Dec 16 03:14:15.234341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:15.238771 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:14:15.240525 systemd[1]: Startup finished in 2.972s (kernel) + 8.011s (initrd) + 5.675s (userspace) = 16.659s. Dec 16 03:14:15.246241 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:14:15.888962 kubelet[1757]: E1216 03:14:15.888885 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:14:15.891272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:14:15.891422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:14:15.892057 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 266.7M memory peak. 
Dec 16 03:14:24.841367 systemd[1]: Started sshd@3-146.190.151.166:22-147.75.109.163:41370.service - OpenSSH per-connection server daemon (147.75.109.163:41370). Dec 16 03:14:24.930226 sshd[1769]: Accepted publickey for core from 147.75.109.163 port 41370 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:24.932092 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:24.938173 systemd-logind[1566]: New session 5 of user core. Dec 16 03:14:24.948101 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:14:24.969903 sshd[1773]: Connection closed by 147.75.109.163 port 41370 Dec 16 03:14:24.970515 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:24.985181 systemd[1]: sshd@3-146.190.151.166:22-147.75.109.163:41370.service: Deactivated successfully. Dec 16 03:14:24.987611 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:14:24.988564 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:14:24.992123 systemd[1]: Started sshd@4-146.190.151.166:22-147.75.109.163:41384.service - OpenSSH per-connection server daemon (147.75.109.163:41384). Dec 16 03:14:24.993496 systemd-logind[1566]: Removed session 5. Dec 16 03:14:25.070357 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 41384 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:25.072463 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:25.079308 systemd-logind[1566]: New session 6 of user core. Dec 16 03:14:25.088100 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 03:14:25.106372 sshd[1783]: Connection closed by 147.75.109.163 port 41384 Dec 16 03:14:25.105497 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:25.117293 systemd[1]: sshd@4-146.190.151.166:22-147.75.109.163:41384.service: Deactivated successfully. Dec 16 03:14:25.119697 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:14:25.120806 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:14:25.124331 systemd[1]: Started sshd@5-146.190.151.166:22-147.75.109.163:41400.service - OpenSSH per-connection server daemon (147.75.109.163:41400). Dec 16 03:14:25.125887 systemd-logind[1566]: Removed session 6. Dec 16 03:14:25.207811 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 41400 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:25.209776 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:25.216105 systemd-logind[1566]: New session 7 of user core. Dec 16 03:14:25.227112 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:14:25.247818 sshd[1793]: Connection closed by 147.75.109.163 port 41400 Dec 16 03:14:25.247411 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:25.262862 systemd[1]: sshd@5-146.190.151.166:22-147.75.109.163:41400.service: Deactivated successfully. Dec 16 03:14:25.264754 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:14:25.265819 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Dec 16 03:14:25.268895 systemd[1]: Started sshd@6-146.190.151.166:22-147.75.109.163:41414.service - OpenSSH per-connection server daemon (147.75.109.163:41414). Dec 16 03:14:25.269495 systemd-logind[1566]: Removed session 7. 
Dec 16 03:14:25.339433 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 41414 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:25.340986 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:25.347857 systemd-logind[1566]: New session 8 of user core. Dec 16 03:14:25.357205 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:14:25.395509 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:14:25.395901 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:14:25.406252 sudo[1804]: pam_unix(sudo:session): session closed for user root Dec 16 03:14:25.410611 sshd[1803]: Connection closed by 147.75.109.163 port 41414 Dec 16 03:14:25.411086 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:25.429302 systemd[1]: sshd@6-146.190.151.166:22-147.75.109.163:41414.service: Deactivated successfully. Dec 16 03:14:25.431641 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:14:25.432610 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:14:25.436605 systemd[1]: Started sshd@7-146.190.151.166:22-147.75.109.163:41424.service - OpenSSH per-connection server daemon (147.75.109.163:41424). Dec 16 03:14:25.437481 systemd-logind[1566]: Removed session 8. Dec 16 03:14:25.504556 sshd[1811]: Accepted publickey for core from 147.75.109.163 port 41424 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:25.506632 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:25.514001 systemd-logind[1566]: New session 9 of user core. Dec 16 03:14:25.519148 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 03:14:25.538904 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:14:25.539293 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:14:25.542478 sudo[1817]: pam_unix(sudo:session): session closed for user root Dec 16 03:14:25.553536 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:14:25.554091 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:14:25.566539 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 03:14:25.612000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:14:25.614064 augenrules[1841]: No rules Dec 16 03:14:25.615856 kernel: kauditd_printk_skb: 156 callbacks suppressed Dec 16 03:14:25.615981 kernel: audit: type=1305 audit(1765854865.612:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:14:25.616007 kernel: audit: type=1300 audit(1765854865.612:228): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd816d3320 a2=420 a3=0 items=0 ppid=1822 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:25.612000 audit[1841]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd816d3320 a2=420 a3=0 items=0 ppid=1822 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:25.615316 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:14:25.615636 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:14:25.612000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:14:25.619779 sudo[1816]: pam_unix(sudo:session): session closed for user root Dec 16 03:14:25.620889 kernel: audit: type=1327 audit(1765854865.612:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:14:25.620967 kernel: audit: type=1130 audit(1765854865.614:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.625861 kernel: audit: type=1131 audit(1765854865.614:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.628729 kernel: audit: type=1106 audit(1765854865.618:231): pid=1816 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.618000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:25.628046 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:25.629011 sshd[1815]: Connection closed by 147.75.109.163 port 41424 Dec 16 03:14:25.618000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.631824 kernel: audit: type=1104 audit(1765854865.618:232): pid=1816 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.632000 audit[1811]: USER_END pid=1811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.632000 audit[1811]: CRED_DISP pid=1811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.638949 kernel: audit: type=1106 audit(1765854865.632:233): pid=1811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.639057 kernel: audit: type=1104 audit(1765854865.632:234): pid=1811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.640263 systemd[1]: sshd@7-146.190.151.166:22-147.75.109.163:41424.service: Deactivated successfully. Dec 16 03:14:25.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-146.190.151.166:22-147.75.109.163:41424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.642992 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:14:25.643844 kernel: audit: type=1131 audit(1765854865.639:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-146.190.151.166:22-147.75.109.163:41424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.646024 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:14:25.648997 systemd[1]: Started sshd@8-146.190.151.166:22-147.75.109.163:41430.service - OpenSSH per-connection server daemon (147.75.109.163:41430). Dec 16 03:14:25.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-146.190.151.166:22-147.75.109.163:41430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.652036 systemd-logind[1566]: Removed session 9. 
Dec 16 03:14:25.721000 audit[1850]: USER_ACCT pid=1850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.722911 sshd[1850]: Accepted publickey for core from 147.75.109.163 port 41430 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:14:25.723000 audit[1850]: CRED_ACQ pid=1850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.723000 audit[1850]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6c1ea150 a2=3 a3=0 items=0 ppid=1 pid=1850 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:25.723000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:14:25.724497 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:14:25.731109 systemd-logind[1566]: New session 10 of user core. Dec 16 03:14:25.735019 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:14:25.738000 audit[1850]: USER_START pid=1850 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.740000 audit[1854]: CRED_ACQ pid=1854 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:14:25.755777 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:14:25.756206 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:14:25.754000 audit[1855]: USER_ACCT pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.755000 audit[1855]: CRED_REFR pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:25.755000 audit[1855]: USER_START pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:26.087296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:14:26.092332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:26.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:26.272547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:26.287416 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:14:26.330079 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 03:14:26.341808 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:14:26.366970 kubelet[1882]: E1216 03:14:26.366713 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:14:26.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:14:26.372354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:14:26.372501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:14:26.373206 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110M memory peak. Dec 16 03:14:26.714042 dockerd[1888]: time="2025-12-16T03:14:26.711808894Z" level=info msg="Starting up" Dec 16 03:14:26.714945 dockerd[1888]: time="2025-12-16T03:14:26.714537258Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:14:26.732439 dockerd[1888]: time="2025-12-16T03:14:26.732372933Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:14:26.758450 systemd[1]: var-lib-docker-metacopy\x2dcheck651851296-merged.mount: Deactivated successfully. Dec 16 03:14:26.793959 dockerd[1888]: time="2025-12-16T03:14:26.793893413Z" level=info msg="Loading containers: start." 
Dec 16 03:14:26.804846 kernel: Initializing XFRM netlink socket Dec 16 03:14:26.887000 audit[1939]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.887000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc0c7c1ef0 a2=0 a3=0 items=0 ppid=1888 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:14:26.890000 audit[1941]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.890000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff619ea380 a2=0 a3=0 items=0 ppid=1888 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:14:26.892000 audit[1943]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.892000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd19320d30 a2=0 a3=0 items=0 ppid=1888 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.892000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:14:26.896000 audit[1945]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.896000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd30123fd0 a2=0 a3=0 items=0 ppid=1888 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:14:26.899000 audit[1947]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.899000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce3a69310 a2=0 a3=0 items=0 ppid=1888 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:14:26.902000 audit[1949]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.902000 audit[1949]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffc8f0ef940 a2=0 a3=0 items=0 ppid=1888 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:14:26.908000 audit[1951]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.908000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff34a63310 a2=0 a3=0 items=0 ppid=1888 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:14:26.911000 audit[1953]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.911000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc554ff370 a2=0 a3=0 items=0 ppid=1888 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:14:26.953000 audit[1956]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.953000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffc46cb650 a2=0 a3=0 items=0 ppid=1888 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:14:26.956000 audit[1958]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.956000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc21cf4bc0 a2=0 a3=0 items=0 ppid=1888 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:14:26.959000 audit[1960]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.959000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffed2921ab0 a2=0 
a3=0 items=0 ppid=1888 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:14:26.962000 audit[1962]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.962000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff037c70b0 a2=0 a3=0 items=0 ppid=1888 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:14:26.965000 audit[1964]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:26.965000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd039849e0 a2=0 a3=0 items=0 ppid=1888 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:26.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:14:27.014000 audit[1994]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.014000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc60cc7a40 a2=0 a3=0 items=0 ppid=1888 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.014000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:14:27.017000 audit[1996]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.017000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd5a2151d0 a2=0 a3=0 items=0 ppid=1888 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.017000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:14:27.020000 audit[1998]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.020000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff1e05610 a2=0 a3=0 items=0 ppid=1888 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:14:27.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:14:27.022000 audit[2000]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.022000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe89a53e90 a2=0 a3=0 items=0 ppid=1888 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.022000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:14:27.025000 audit[2002]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.025000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0ae3ff50 a2=0 a3=0 items=0 ppid=1888 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.025000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:14:27.027000 audit[2004]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.027000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcce4e15e0 a2=0 a3=0 items=0 ppid=1888 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:14:27.030000 audit[2006]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.030000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe19006c90 a2=0 a3=0 items=0 ppid=1888 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.030000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:14:27.033000 audit[2008]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.033000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd203234e0 a2=0 a3=0 items=0 ppid=1888 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.033000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:14:27.035000 audit[2010]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.035000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe38ecc4a0 a2=0 a3=0 items=0 ppid=1888 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:14:27.039000 audit[2012]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.039000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe9cefbe40 a2=0 a3=0 items=0 ppid=1888 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:14:27.042000 audit[2014]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.042000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd03fb0890 a2=0 a3=0 items=0 ppid=1888 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.042000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:14:27.045000 audit[2016]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.045000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff69c34d10 a2=0 a3=0 items=0 ppid=1888 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:14:27.049000 audit[2018]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.049000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcbbde9a50 a2=0 a3=0 items=0 ppid=1888 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.049000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:14:27.056000 audit[2023]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.056000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe22362b0 a2=0 a3=0 items=0 ppid=1888 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:14:27.059000 audit[2025]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.059000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd5c9ea800 a2=0 a3=0 items=0 ppid=1888 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:14:27.062000 audit[2027]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.062000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdcc6fa870 a2=0 a3=0 items=0 ppid=1888 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:14:27.065000 audit[2029]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.065000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc5cd0e80 a2=0 a3=0 items=0 ppid=1888 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:14:27.068000 audit[2031]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.068000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd303f73d0 a2=0 a3=0 items=0 ppid=1888 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:14:27.071000 audit[2033]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2033 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:27.071000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcecaa2cd0 a2=0 a3=0 items=0 ppid=1888 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:14:27.078659 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Dec 16 03:14:27.099655 systemd-timesyncd[1458]: Contacted time server 23.95.35.34:123 (2.flatcar.pool.ntp.org). Dec 16 03:14:27.099000 audit[2038]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.100960 systemd-timesyncd[1458]: Initial clock synchronization to Tue 2025-12-16 03:14:27.415195 UTC. Dec 16 03:14:27.099000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc56cf8f20 a2=0 a3=0 items=0 ppid=1888 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:14:27.107000 audit[2040]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.107000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffc93c0590 a2=0 a3=0 items=0 ppid=1888 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:14:27.124000 audit[2048]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.124000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff6a4c5de0 a2=0 a3=0 items=0 ppid=1888 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:14:27.138000 audit[2054]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.138000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffed7f40550 a2=0 a3=0 items=0 ppid=1888 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.138000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:14:27.142000 audit[2056]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.142000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc7f241e30 a2=0 a3=0 items=0 ppid=1888 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:14:27.145000 audit[2058]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.145000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd49559ee0 a2=0 a3=0 items=0 ppid=1888 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:14:27.149000 audit[2060]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.149000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe8c26a060 a2=0 a3=0 items=0 ppid=1888 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:14:27.153000 audit[2062]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:27.153000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe5724b500 a2=0 a3=0 items=0 ppid=1888 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:27.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:14:27.154542 systemd-networkd[1498]: docker0: Link UP Dec 16 03:14:27.159279 dockerd[1888]: time="2025-12-16T03:14:27.159124311Z" level=info msg="Loading containers: done." Dec 16 03:14:27.181024 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3024141078-merged.mount: Deactivated successfully. 
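Aside: the audit PROCTITLE records in the run above store the full command line hex-encoded, with NUL bytes separating the individual arguments; the first one in this run decodes to /usr/bin/ip6tables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER, i.e. Docker programming its NAT chain. A minimal Python sketch for decoding these records offline (the function name is ours, not part of auditd or iptables):

    # decode_proctitle.py - illustrative helper, not part of any tool shown in this log.
    # PROCTITLE records hex-encode the command line, with NUL bytes between arguments.
    def decode_proctitle(hex_str: str) -> str:
        raw = bytes.fromhex(hex_str)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    if __name__ == "__main__":
        sample = ("2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174"
                  "002D4100505245524F5554494E47002D6D006164647274797065002D2D647374"
                  "2D74797065004C4F43414C002D6A00444F434B4552")
        print(decode_proctitle(sample))
        # -> /usr/bin/ip6tables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER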
Dec 16 03:14:27.182151 dockerd[1888]: time="2025-12-16T03:14:27.182100437Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:14:27.182280 dockerd[1888]: time="2025-12-16T03:14:27.182219366Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:14:27.182331 dockerd[1888]: time="2025-12-16T03:14:27.182315299Z" level=info msg="Initializing buildkit" Dec 16 03:14:27.207594 dockerd[1888]: time="2025-12-16T03:14:27.207544287Z" level=info msg="Completed buildkit initialization" Dec 16 03:14:27.215991 dockerd[1888]: time="2025-12-16T03:14:27.215622934Z" level=info msg="Daemon has completed initialization" Dec 16 03:14:27.215991 dockerd[1888]: time="2025-12-16T03:14:27.215739230Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:14:27.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:27.216574 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:14:28.167574 containerd[1591]: time="2025-12-16T03:14:28.167192929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 03:14:29.009433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548641911.mount: Deactivated successfully. Dec 16 03:14:30.653575 containerd[1591]: time="2025-12-16T03:14:30.653516429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:30.654829 containerd[1591]: time="2025-12-16T03:14:30.654666239Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 16 03:14:30.655453 containerd[1591]: time="2025-12-16T03:14:30.655423357Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:30.657993 containerd[1591]: time="2025-12-16T03:14:30.657941361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:30.659507 containerd[1591]: time="2025-12-16T03:14:30.658911376Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.491677494s" Dec 16 03:14:30.659507 containerd[1591]: time="2025-12-16T03:14:30.658948552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 03:14:30.659842 containerd[1591]: time="2025-12-16T03:14:30.659815866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 03:14:32.897652 containerd[1591]: time="2025-12-16T03:14:32.897584738Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:32.899474 containerd[1591]: time="2025-12-16T03:14:32.898890739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 03:14:32.899474 containerd[1591]: time="2025-12-16T03:14:32.898953255Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:32.902642 containerd[1591]: time="2025-12-16T03:14:32.902597947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:32.903932 containerd[1591]: time="2025-12-16T03:14:32.903885464Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.24398883s" Dec 16 03:14:32.904024 containerd[1591]: time="2025-12-16T03:14:32.903936510Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 03:14:32.904692 containerd[1591]: time="2025-12-16T03:14:32.904604005Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 03:14:33.260694 systemd-resolved[1273]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Dec 16 03:14:34.551845 containerd[1591]: time="2025-12-16T03:14:34.551068362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:34.553578 containerd[1591]: time="2025-12-16T03:14:34.553535185Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 03:14:34.554418 containerd[1591]: time="2025-12-16T03:14:34.554381732Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:34.556874 containerd[1591]: time="2025-12-16T03:14:34.556785093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:34.557931 containerd[1591]: time="2025-12-16T03:14:34.557887681Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.652543449s" Dec 16 03:14:34.558077 containerd[1591]: time="2025-12-16T03:14:34.558061776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 03:14:34.559154 containerd[1591]: time="2025-12-16T03:14:34.559107016Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 03:14:35.899602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193586437.mount: Deactivated successfully. Dec 16 03:14:36.342056 systemd-resolved[1273]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
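Aside: each "Pulled image ... in <duration>" line above pairs the size reported for the image with the wall-clock pull time, so an effective pull rate can be read off directly. A back-of-the-envelope sketch using the kube-scheduler figures just logged (size 21815154 bytes in 1.652543449s); the helper name is illustrative only:

    # pull_rate.py - illustration only; containerd does not expose such a helper.
    def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
        """Effective pull rate in MiB/s for one logged image pull."""
        return size_bytes / seconds / (1024 * 1024)

    # Figures from the kube-scheduler:v1.33.7 pull above:
    print(f"{pull_rate_mib_s(21815154, 1.652543449):.1f} MiB/s")  # roughly 12.6 MiB/s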
Dec 16 03:14:36.531788 containerd[1591]: time="2025-12-16T03:14:36.531192223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:36.532915 containerd[1591]: time="2025-12-16T03:14:36.532881506Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 16 03:14:36.533664 containerd[1591]: time="2025-12-16T03:14:36.533633315Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:36.535619 containerd[1591]: time="2025-12-16T03:14:36.535591598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:36.536801 containerd[1591]: time="2025-12-16T03:14:36.536730793Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.977587456s" Dec 16 03:14:36.536801 containerd[1591]: time="2025-12-16T03:14:36.536761376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 03:14:36.537426 containerd[1591]: time="2025-12-16T03:14:36.537404517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 03:14:36.587363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:14:36.589343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:36.758783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:36.764022 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 03:14:36.764141 kernel: audit: type=1130 audit(1765854876.758:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:36.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:36.770247 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:14:36.825609 kubelet[2189]: E1216 03:14:36.825551 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:14:36.828926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:14:36.829158 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 03:14:36.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:14:36.829868 systemd[1]: kubelet.service: Consumed 186ms CPU time, 110.9M memory peak. Dec 16 03:14:36.832829 kernel: audit: type=1131 audit(1765854876.828:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:14:37.131949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959823771.mount: Deactivated successfully. Dec 16 03:14:38.092817 containerd[1591]: time="2025-12-16T03:14:38.091802806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:38.093273 containerd[1591]: time="2025-12-16T03:14:38.092840915Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 16 03:14:38.093502 containerd[1591]: time="2025-12-16T03:14:38.093471492Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:38.099738 containerd[1591]: time="2025-12-16T03:14:38.099689542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:38.101309 containerd[1591]: time="2025-12-16T03:14:38.101250412Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.563561065s" Dec 16 03:14:38.101309 containerd[1591]: time="2025-12-16T03:14:38.101309013Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 03:14:38.102250 containerd[1591]: time="2025-12-16T03:14:38.102227179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:14:38.567124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985797229.mount: Deactivated successfully. 
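Aside: the kernel audit lines carry their own timestamp in audit(<epoch seconds>.<millis>:<serial>) form; converting the epoch value back to UTC reproduces the journal timestamp printed on the same line (1765854876.758 corresponds to 03:14:36.758 on this boot). A small sketch of that conversion:

    # audit_ts.py - converts the epoch timestamp inside audit(...) records to UTC.
    from datetime import datetime, timezone

    def audit_to_utc(stamp: str) -> datetime:
        """'1765854876.758:288' -> timezone-aware UTC datetime (the serial is ignored)."""
        epoch = float(stamp.split(":")[0])
        return datetime.fromtimestamp(epoch, tz=timezone.utc)

    print(audit_to_utc("1765854876.758:288").strftime("%Y-%m-%d %H:%M:%S UTC"))
    # -> 2025-12-16 03:14:36 UTC, matching the journal line above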
Dec 16 03:14:38.572270 containerd[1591]: time="2025-12-16T03:14:38.571552579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:14:38.572270 containerd[1591]: time="2025-12-16T03:14:38.572232704Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:14:38.572766 containerd[1591]: time="2025-12-16T03:14:38.572742446Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:14:38.575890 containerd[1591]: time="2025-12-16T03:14:38.575846630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:14:38.576127 containerd[1591]: time="2025-12-16T03:14:38.576013908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.658229ms" Dec 16 03:14:38.576172 containerd[1591]: time="2025-12-16T03:14:38.576135764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:14:38.576746 containerd[1591]: time="2025-12-16T03:14:38.576726282Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 03:14:39.149216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578877971.mount: Deactivated successfully. 
Dec 16 03:14:42.726168 containerd[1591]: time="2025-12-16T03:14:42.726102402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:42.728049 containerd[1591]: time="2025-12-16T03:14:42.728001709Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Dec 16 03:14:42.728925 containerd[1591]: time="2025-12-16T03:14:42.728874761Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:42.733718 containerd[1591]: time="2025-12-16T03:14:42.733581844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:42.735344 containerd[1591]: time="2025-12-16T03:14:42.735053053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.157867216s" Dec 16 03:14:42.735344 containerd[1591]: time="2025-12-16T03:14:42.735114468Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 03:14:45.293093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:45.293628 systemd[1]: kubelet.service: Consumed 186ms CPU time, 110.9M memory peak. Dec 16 03:14:45.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:45.297918 kernel: audit: type=1130 audit(1765854885.291:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:45.302278 kernel: audit: type=1131 audit(1765854885.291:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:45.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:45.301137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:45.334765 systemd[1]: Reload requested from client PID 2336 ('systemctl') (unit session-10.scope)... Dec 16 03:14:45.334810 systemd[1]: Reloading... Dec 16 03:14:45.467835 zram_generator::config[2382]: No configuration found. Dec 16 03:14:45.737909 systemd[1]: Reloading finished in 402 ms. 
Dec 16 03:14:45.775538 kernel: audit: type=1334 audit(1765854885.769:292): prog-id=61 op=LOAD Dec 16 03:14:45.775646 kernel: audit: type=1334 audit(1765854885.769:293): prog-id=50 op=UNLOAD Dec 16 03:14:45.769000 audit: BPF prog-id=61 op=LOAD Dec 16 03:14:45.769000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:14:45.770000 audit: BPF prog-id=62 op=LOAD Dec 16 03:14:45.779816 kernel: audit: type=1334 audit(1765854885.770:294): prog-id=62 op=LOAD Dec 16 03:14:45.770000 audit: BPF prog-id=63 op=LOAD Dec 16 03:14:45.781802 kernel: audit: type=1334 audit(1765854885.770:295): prog-id=63 op=LOAD Dec 16 03:14:45.770000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:14:45.783800 kernel: audit: type=1334 audit(1765854885.770:296): prog-id=51 op=UNLOAD Dec 16 03:14:45.770000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:14:45.786537 kernel: audit: type=1334 audit(1765854885.770:297): prog-id=52 op=UNLOAD Dec 16 03:14:45.786605 kernel: audit: type=1334 audit(1765854885.771:298): prog-id=64 op=LOAD Dec 16 03:14:45.771000 audit: BPF prog-id=64 op=LOAD Dec 16 03:14:45.787763 kernel: audit: type=1334 audit(1765854885.771:299): prog-id=47 op=UNLOAD Dec 16 03:14:45.771000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:14:45.771000 audit: BPF prog-id=65 op=LOAD Dec 16 03:14:45.771000 audit: BPF prog-id=66 op=LOAD Dec 16 03:14:45.771000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:14:45.771000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:14:45.771000 audit: BPF prog-id=67 op=LOAD Dec 16 03:14:45.771000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:14:45.775000 audit: BPF prog-id=68 op=LOAD Dec 16 03:14:45.775000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:14:45.776000 audit: BPF prog-id=69 op=LOAD Dec 16 03:14:45.776000 audit: BPF prog-id=70 op=LOAD Dec 16 03:14:45.776000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:14:45.776000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:14:45.776000 audit: BPF prog-id=71 op=LOAD Dec 16 03:14:45.776000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:14:45.777000 audit: BPF prog-id=72 op=LOAD Dec 16 03:14:45.777000 audit: BPF prog-id=73 op=LOAD Dec 16 03:14:45.777000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:14:45.777000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:14:45.780000 audit: BPF prog-id=74 op=LOAD Dec 16 03:14:45.780000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:14:45.780000 audit: BPF prog-id=75 op=LOAD Dec 16 03:14:45.780000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:14:45.780000 audit: BPF prog-id=76 op=LOAD Dec 16 03:14:45.782000 audit: BPF prog-id=77 op=LOAD Dec 16 03:14:45.782000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:14:45.782000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:14:45.782000 audit: BPF prog-id=78 op=LOAD Dec 16 03:14:45.782000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:14:45.782000 audit: BPF prog-id=79 op=LOAD Dec 16 03:14:45.782000 audit: BPF prog-id=80 op=LOAD Dec 16 03:14:45.782000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:14:45.782000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:14:45.817426 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:14:45.817522 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:14:45.817953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:45.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:14:45.818023 systemd[1]: kubelet.service: Consumed 117ms CPU time, 98.5M memory peak. 
Dec 16 03:14:45.819924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:46.005210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:46.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:46.025260 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:14:46.114136 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:46.114136 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:14:46.114136 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:46.122611 kubelet[2436]: I1216 03:14:46.121660 2436 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:14:47.079125 kubelet[2436]: I1216 03:14:47.079069 2436 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:14:47.079380 kubelet[2436]: I1216 03:14:47.079360 2436 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:14:47.079893 kubelet[2436]: I1216 03:14:47.079866 2436 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:14:47.106936 kubelet[2436]: I1216 03:14:47.106893 2436 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:14:47.109638 kubelet[2436]: E1216 03:14:47.109054 2436 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://146.190.151.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 03:14:47.129303 kubelet[2436]: I1216 03:14:47.129252 2436 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:14:47.137568 kubelet[2436]: I1216 03:14:47.137516 2436 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:14:47.139422 kubelet[2436]: I1216 03:14:47.139327 2436 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:14:47.142881 kubelet[2436]: I1216 03:14:47.139416 2436 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-7-1189c174c4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:14:47.142881 kubelet[2436]: I1216 03:14:47.142883 2436 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:14:47.142881 kubelet[2436]: I1216 03:14:47.142902 2436 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:14:47.144437 kubelet[2436]: I1216 03:14:47.144352 2436 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:47.147475 kubelet[2436]: I1216 03:14:47.147112 2436 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:14:47.147475 kubelet[2436]: I1216 03:14:47.147155 2436 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:14:47.147475 kubelet[2436]: I1216 03:14:47.147187 2436 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:14:47.149230 kubelet[2436]: I1216 03:14:47.148833 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:14:47.159571 kubelet[2436]: I1216 03:14:47.159542 2436 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:14:47.160246 kubelet[2436]: I1216 03:14:47.160222 2436 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:14:47.161075 kubelet[2436]: W1216 03:14:47.161051 2436 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
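Aside: the HardEvictionThresholds in the NodeConfig above mix an absolute quantity (memory.available below 100Mi) with percentages of capacity (nodefs.available below 10%, imagefs.available below 15%, inodesFree below 5%). A sketch of what those percentages resolve to, assuming for illustration a node with 2 GiB of memory and an 80 GiB filesystem; these capacities are assumptions, not values taken from this log:

    # eviction_thresholds.py - illustration only; the kubelet resolves these internally.
    GiB = 1024 ** 3
    MiB = 1024 ** 2

    # Assumed capacities for the example (not from this log):
    memory_capacity = 2 * GiB
    nodefs_capacity = 80 * GiB

    thresholds = {
        "memory.available":  100 * MiB,               # absolute quantity: 100Mi
        "nodefs.available":  0.10 * nodefs_capacity,  # 10% of filesystem capacity
        "imagefs.available": 0.15 * nodefs_capacity,  # 15%; same fs here, no separate imagefs
    }
    for signal, minimum in thresholds.items():
        print(f"{signal}: evict when below {minimum / MiB:.0f} MiB")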
Dec 16 03:14:47.167417 kubelet[2436]: E1216 03:14:47.167132 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://146.190.151.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:14:47.167417 kubelet[2436]: E1216 03:14:47.167255 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://146.190.151.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-7-1189c174c4&limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:14:47.171402 kubelet[2436]: I1216 03:14:47.171331 2436 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:14:47.171590 kubelet[2436]: I1216 03:14:47.171565 2436 server.go:1289] "Started kubelet" Dec 16 03:14:47.173778 kubelet[2436]: I1216 03:14:47.173363 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:14:47.176016 kubelet[2436]: I1216 03:14:47.175688 2436 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:14:47.178273 kubelet[2436]: I1216 03:14:47.178016 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:14:47.184000 audit[2451]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.184000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeea53e920 a2=0 a3=0 items=0 ppid=2436 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:14:47.185000 audit[2452]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.185000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc13c8df40 a2=0 a3=0 items=0 ppid=2436 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:14:47.189280 kubelet[2436]: I1216 03:14:47.188599 2436 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:14:47.189730 kubelet[2436]: E1216 03:14:47.180695 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.151.166:6443/api/v1/namespaces/default/events\": dial tcp 146.190.151.166:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-7-1189c174c4.188193a9fabca57b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-7-1189c174c4,UID:ci-4547.0.0-7-1189c174c4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-7-1189c174c4,},FirstTimestamp:2025-12-16 03:14:47.171368315 +0000 UTC m=+1.139695083,LastTimestamp:2025-12-16 03:14:47.171368315 +0000 UTC m=+1.139695083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-7-1189c174c4,}" Dec 16 03:14:47.189730 kubelet[2436]: I1216 03:14:47.189649 2436 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:14:47.193166 kubelet[2436]: I1216 03:14:47.192769 2436 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:14:47.193365 kubelet[2436]: I1216 03:14:47.193339 2436 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:14:47.195235 kubelet[2436]: I1216 03:14:47.195210 2436 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:14:47.195344 kubelet[2436]: I1216 03:14:47.195325 2436 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:14:47.198040 kubelet[2436]: E1216 03:14:47.197069 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://146.190.151.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:14:47.198040 kubelet[2436]: E1216 03:14:47.197812 2436 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:14:47.199615 kubelet[2436]: E1216 03:14:47.199579 2436 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-7-1189c174c4\" not found" Dec 16 03:14:47.200193 kubelet[2436]: I1216 03:14:47.200167 2436 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:14:47.200479 kubelet[2436]: E1216 03:14:47.200443 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-7-1189c174c4?timeout=10s\": dial tcp 146.190.151.166:6443: connect: connection refused" interval="200ms" Dec 16 03:14:47.201309 kubelet[2436]: I1216 03:14:47.201290 2436 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:14:47.201309 kubelet[2436]: I1216 03:14:47.201305 2436 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:14:47.200000 audit[2454]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.200000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe7fd2ed60 a2=0 a3=0 items=0 ppid=2436 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:47.204000 audit[2456]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.204000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff24be70c0 a2=0 a3=0 items=0 ppid=2436 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:47.213000 audit[2459]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.213000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffccf826ae0 a2=0 a3=0 items=0 ppid=2436 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:14:47.215050 kubelet[2436]: I1216 03:14:47.215015 2436 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 16 03:14:47.215000 audit[2460]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:47.215000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff46f4e1e0 a2=0 a3=0 items=0 ppid=2436 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:14:47.216867 kubelet[2436]: I1216 03:14:47.216848 2436 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:14:47.216943 kubelet[2436]: I1216 03:14:47.216936 2436 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:14:47.217004 kubelet[2436]: I1216 03:14:47.216997 2436 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:14:47.217054 kubelet[2436]: I1216 03:14:47.217048 2436 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:14:47.217157 kubelet[2436]: E1216 03:14:47.217140 2436 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:14:47.217000 audit[2461]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.217000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea5e77610 a2=0 a3=0 items=0 ppid=2436 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:14:47.219000 audit[2463]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.219000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3b282680 a2=0 a3=0 items=0 ppid=2436 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:14:47.221000 audit[2464]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:47.221000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5c4b9a50 a2=0 a3=0 items=0 ppid=2436 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:14:47.222000 audit[2465]: NETFILTER_CFG 
table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:47.222000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffebb380e0 a2=0 a3=0 items=0 ppid=2436 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:14:47.224000 audit[2466]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:47.224000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8741b060 a2=0 a3=0 items=0 ppid=2436 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:14:47.225000 audit[2467]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:47.225000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff02bb9590 a2=0 a3=0 items=0 ppid=2436 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:14:47.227406 kubelet[2436]: E1216 03:14:47.227304 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://146.190.151.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:14:47.234616 kubelet[2436]: I1216 03:14:47.234541 2436 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:14:47.234616 kubelet[2436]: I1216 03:14:47.234568 2436 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:14:47.234616 kubelet[2436]: I1216 03:14:47.234602 2436 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:47.238271 kubelet[2436]: I1216 03:14:47.238202 2436 policy_none.go:49] "None policy: Start" Dec 16 03:14:47.238271 kubelet[2436]: I1216 03:14:47.238246 2436 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:14:47.238271 kubelet[2436]: I1216 03:14:47.238263 2436 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:14:47.244964 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:14:47.255938 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:14:47.260855 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 03:14:47.275349 kubelet[2436]: E1216 03:14:47.275317 2436 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:14:47.275677 kubelet[2436]: I1216 03:14:47.275660 2436 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:14:47.275818 kubelet[2436]: I1216 03:14:47.275772 2436 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:14:47.276120 kubelet[2436]: I1216 03:14:47.276105 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:14:47.277703 kubelet[2436]: E1216 03:14:47.277678 2436 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:14:47.277813 kubelet[2436]: E1216 03:14:47.277723 2436 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-7-1189c174c4\" not found" Dec 16 03:14:47.330971 systemd[1]: Created slice kubepods-burstable-pod854de2fbb8b305147786f19fcce2f987.slice - libcontainer container kubepods-burstable-pod854de2fbb8b305147786f19fcce2f987.slice. Dec 16 03:14:47.352145 kubelet[2436]: E1216 03:14:47.352075 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.353691 systemd[1]: Created slice kubepods-burstable-pod7b549725374ee46be68981bf071ba358.slice - libcontainer container kubepods-burstable-pod7b549725374ee46be68981bf071ba358.slice. Dec 16 03:14:47.367395 kubelet[2436]: E1216 03:14:47.366985 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.371117 systemd[1]: Created slice kubepods-burstable-pod4442420bdb3cd3973298aa6e3bfba933.slice - libcontainer container kubepods-burstable-pod4442420bdb3cd3973298aa6e3bfba933.slice. 
Dec 16 03:14:47.373984 kubelet[2436]: E1216 03:14:47.373933 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.378036 kubelet[2436]: I1216 03:14:47.377977 2436 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.378538 kubelet[2436]: E1216 03:14:47.378501 2436 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.151.166:6443/api/v1/nodes\": dial tcp 146.190.151.166:6443: connect: connection refused" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397337 kubelet[2436]: I1216 03:14:47.396996 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397337 kubelet[2436]: I1216 03:14:47.397053 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397337 kubelet[2436]: I1216 03:14:47.397089 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b549725374ee46be68981bf071ba358-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-7-1189c174c4\" (UID: \"7b549725374ee46be68981bf071ba358\") " pod="kube-system/kube-scheduler-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397337 kubelet[2436]: I1216 03:14:47.397113 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4442420bdb3cd3973298aa6e3bfba933-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" (UID: \"4442420bdb3cd3973298aa6e3bfba933\") " pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397337 kubelet[2436]: I1216 03:14:47.397139 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4442420bdb3cd3973298aa6e3bfba933-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" (UID: \"4442420bdb3cd3973298aa6e3bfba933\") " pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397686 kubelet[2436]: I1216 03:14:47.397180 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397686 kubelet[2436]: I1216 03:14:47.397206 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4442420bdb3cd3973298aa6e3bfba933-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4547.0.0-7-1189c174c4\" (UID: \"4442420bdb3cd3973298aa6e3bfba933\") " pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397686 kubelet[2436]: I1216 03:14:47.397228 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.397686 kubelet[2436]: I1216 03:14:47.397273 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.401628 kubelet[2436]: E1216 03:14:47.401581 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-7-1189c174c4?timeout=10s\": dial tcp 146.190.151.166:6443: connect: connection refused" interval="400ms" Dec 16 03:14:47.580019 kubelet[2436]: I1216 03:14:47.579975 2436 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.580341 kubelet[2436]: E1216 03:14:47.580314 2436 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.151.166:6443/api/v1/nodes\": dial tcp 146.190.151.166:6443: connect: connection refused" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.653139 kubelet[2436]: E1216 03:14:47.653032 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:47.653920 containerd[1591]: time="2025-12-16T03:14:47.653834117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-7-1189c174c4,Uid:854de2fbb8b305147786f19fcce2f987,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:47.668576 kubelet[2436]: E1216 03:14:47.668414 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:47.669465 containerd[1591]: time="2025-12-16T03:14:47.669319733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-7-1189c174c4,Uid:7b549725374ee46be68981bf071ba358,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:47.675858 kubelet[2436]: E1216 03:14:47.674993 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:47.688833 containerd[1591]: time="2025-12-16T03:14:47.688594538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-7-1189c174c4,Uid:4442420bdb3cd3973298aa6e3bfba933,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:47.775228 containerd[1591]: time="2025-12-16T03:14:47.775167022Z" level=info msg="connecting to shim c0c6fd96e584d3960b2e5970a794ca47d0075e17ab45d00f544091e2902c0494" 
address="unix:///run/containerd/s/0fb75f6b0119531a9e2be42cc36b3d03c3373617ae76aafcefcec736a50adec0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:47.776819 containerd[1591]: time="2025-12-16T03:14:47.776758150Z" level=info msg="connecting to shim c9f2aa7ffb2437c9157dcf9af8314a766c94850961d1c26f0365e9c492cd1412" address="unix:///run/containerd/s/28ceb6249c140c65a70cde0e341718db5a8e64b629d9fc77fa68112da6ea3a26" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:47.781519 containerd[1591]: time="2025-12-16T03:14:47.781463751Z" level=info msg="connecting to shim 0015742eb71942c706b8ecc02c4d4d7a2c4a4dbca18a51d94c58f0aaeff817f2" address="unix:///run/containerd/s/4c55f75158294c9f769349a254f5772c5e0435fedec5b17ebdf7f3f22fd1a2bc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:47.802253 kubelet[2436]: E1216 03:14:47.802215 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-7-1189c174c4?timeout=10s\": dial tcp 146.190.151.166:6443: connect: connection refused" interval="800ms" Dec 16 03:14:47.877199 systemd[1]: Started cri-containerd-c9f2aa7ffb2437c9157dcf9af8314a766c94850961d1c26f0365e9c492cd1412.scope - libcontainer container c9f2aa7ffb2437c9157dcf9af8314a766c94850961d1c26f0365e9c492cd1412. Dec 16 03:14:47.886337 systemd[1]: Started cri-containerd-0015742eb71942c706b8ecc02c4d4d7a2c4a4dbca18a51d94c58f0aaeff817f2.scope - libcontainer container 0015742eb71942c706b8ecc02c4d4d7a2c4a4dbca18a51d94c58f0aaeff817f2. Dec 16 03:14:47.890403 systemd[1]: Started cri-containerd-c0c6fd96e584d3960b2e5970a794ca47d0075e17ab45d00f544091e2902c0494.scope - libcontainer container c0c6fd96e584d3960b2e5970a794ca47d0075e17ab45d00f544091e2902c0494. 
Dec 16 03:14:47.914000 audit: BPF prog-id=81 op=LOAD Dec 16 03:14:47.915000 audit: BPF prog-id=82 op=LOAD Dec 16 03:14:47.915000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.915000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:14:47.915000 audit[2531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.920000 audit: BPF prog-id=83 op=LOAD Dec 16 03:14:47.920000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.920000 audit: BPF prog-id=84 op=LOAD Dec 16 03:14:47.920000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.920000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:14:47.920000 audit[2531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.920000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:14:47.920000 audit[2531]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.921000 audit: BPF prog-id=85 op=LOAD Dec 16 03:14:47.921000 audit[2531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2508 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030313537343265623731393432633730366238656363303263346434 Dec 16 03:14:47.923000 audit: BPF prog-id=86 op=LOAD Dec 16 03:14:47.924000 audit: BPF prog-id=87 op=LOAD Dec 16 03:14:47.924000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2498 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.924000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:14:47.924000 audit[2535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.925000 audit: BPF prog-id=88 op=LOAD Dec 16 03:14:47.925000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2498 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.926000 audit: BPF prog-id=89 op=LOAD Dec 16 03:14:47.926000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2498 pid=2535 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.926000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:14:47.926000 audit[2535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.926000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:14:47.926000 audit[2535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.927000 audit: BPF prog-id=90 op=LOAD Dec 16 03:14:47.928000 audit: BPF prog-id=91 op=LOAD Dec 16 03:14:47.928000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.928000 audit: BPF prog-id=91 op=UNLOAD Dec 16 03:14:47.928000 audit[2515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.929000 audit: BPF prog-id=92 op=LOAD Dec 16 03:14:47.926000 audit: BPF prog-id=93 op=LOAD Dec 16 03:14:47.929000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.926000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2498 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339663261613766666232343337633931353764636639616638333134 Dec 16 03:14:47.929000 audit: BPF prog-id=94 op=LOAD Dec 16 03:14:47.929000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.929000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:14:47.929000 audit[2515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.930000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:14:47.930000 audit[2515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.930000 audit: BPF prog-id=95 op=LOAD Dec 16 03:14:47.930000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2494 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.930000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330633666643936653538346433393630623265353937306137393463 Dec 16 03:14:47.970447 containerd[1591]: time="2025-12-16T03:14:47.970402604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-7-1189c174c4,Uid:4442420bdb3cd3973298aa6e3bfba933,Namespace:kube-system,Attempt:0,} returns sandbox id \"0015742eb71942c706b8ecc02c4d4d7a2c4a4dbca18a51d94c58f0aaeff817f2\"" Dec 16 03:14:47.973298 kubelet[2436]: E1216 03:14:47.973265 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:47.981948 kubelet[2436]: I1216 03:14:47.981089 2436 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.981948 kubelet[2436]: E1216 03:14:47.981513 2436 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.151.166:6443/api/v1/nodes\": dial tcp 146.190.151.166:6443: connect: connection refused" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:47.982393 containerd[1591]: time="2025-12-16T03:14:47.982347301Z" level=info msg="CreateContainer within sandbox \"0015742eb71942c706b8ecc02c4d4d7a2c4a4dbca18a51d94c58f0aaeff817f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:14:47.996407 containerd[1591]: time="2025-12-16T03:14:47.996072291Z" level=info msg="Container 63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:47.998730 containerd[1591]: time="2025-12-16T03:14:47.998580246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-7-1189c174c4,Uid:854de2fbb8b305147786f19fcce2f987,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0c6fd96e584d3960b2e5970a794ca47d0075e17ab45d00f544091e2902c0494\"" Dec 16 03:14:48.000097 kubelet[2436]: E1216 03:14:47.999999 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:48.003756 containerd[1591]: time="2025-12-16T03:14:48.003697213Z" level=info msg="CreateContainer within sandbox \"c0c6fd96e584d3960b2e5970a794ca47d0075e17ab45d00f544091e2902c0494\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:14:48.013268 containerd[1591]: time="2025-12-16T03:14:48.013197477Z" level=info msg="CreateContainer within sandbox \"0015742eb71942c706b8ecc02c4d4d7a2c4a4dbca18a51d94c58f0aaeff817f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56\"" Dec 16 03:14:48.014474 containerd[1591]: time="2025-12-16T03:14:48.014387336Z" level=info msg="StartContainer for \"63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56\"" Dec 16 03:14:48.018834 containerd[1591]: time="2025-12-16T03:14:48.018729753Z" level=info msg="connecting to shim 63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56" address="unix:///run/containerd/s/4c55f75158294c9f769349a254f5772c5e0435fedec5b17ebdf7f3f22fd1a2bc" protocol=ttrpc version=3 Dec 16 03:14:48.029049 containerd[1591]: time="2025-12-16T03:14:48.028964240Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-7-1189c174c4,Uid:7b549725374ee46be68981bf071ba358,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f2aa7ffb2437c9157dcf9af8314a766c94850961d1c26f0365e9c492cd1412\"" Dec 16 03:14:48.030767 kubelet[2436]: E1216 03:14:48.030724 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:48.033420 containerd[1591]: time="2025-12-16T03:14:48.033376703Z" level=info msg="Container 40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:48.034439 containerd[1591]: time="2025-12-16T03:14:48.034331314Z" level=info msg="CreateContainer within sandbox \"c9f2aa7ffb2437c9157dcf9af8314a766c94850961d1c26f0365e9c492cd1412\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:14:48.047578 containerd[1591]: time="2025-12-16T03:14:48.047495501Z" level=info msg="CreateContainer within sandbox \"c0c6fd96e584d3960b2e5970a794ca47d0075e17ab45d00f544091e2902c0494\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3\"" Dec 16 03:14:48.048969 containerd[1591]: time="2025-12-16T03:14:48.048200733Z" level=info msg="StartContainer for \"40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3\"" Dec 16 03:14:48.050242 containerd[1591]: time="2025-12-16T03:14:48.050199342Z" level=info msg="Container b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:48.051403 containerd[1591]: time="2025-12-16T03:14:48.051372844Z" level=info msg="connecting to shim 40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3" address="unix:///run/containerd/s/0fb75f6b0119531a9e2be42cc36b3d03c3373617ae76aafcefcec736a50adec0" protocol=ttrpc version=3 Dec 16 03:14:48.053078 systemd[1]: Started cri-containerd-63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56.scope - libcontainer container 63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56. 
Dec 16 03:14:48.063351 containerd[1591]: time="2025-12-16T03:14:48.063282699Z" level=info msg="CreateContainer within sandbox \"c9f2aa7ffb2437c9157dcf9af8314a766c94850961d1c26f0365e9c492cd1412\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230\"" Dec 16 03:14:48.065010 containerd[1591]: time="2025-12-16T03:14:48.064980173Z" level=info msg="StartContainer for \"b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230\"" Dec 16 03:14:48.068893 containerd[1591]: time="2025-12-16T03:14:48.068777442Z" level=info msg="connecting to shim b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230" address="unix:///run/containerd/s/28ceb6249c140c65a70cde0e341718db5a8e64b629d9fc77fa68112da6ea3a26" protocol=ttrpc version=3 Dec 16 03:14:48.073835 kubelet[2436]: E1216 03:14:48.073800 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://146.190.151.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:14:48.084867 systemd[1]: Started cri-containerd-40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3.scope - libcontainer container 40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3. Dec 16 03:14:48.089000 audit: BPF prog-id=96 op=LOAD Dec 16 03:14:48.089000 audit: BPF prog-id=97 op=LOAD Dec 16 03:14:48.089000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 03:14:48.090000 audit: BPF prog-id=97 op=UNLOAD Dec 16 03:14:48.090000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 03:14:48.090000 audit: BPF prog-id=98 op=LOAD Dec 16 03:14:48.090000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 
03:14:48.090000 audit: BPF prog-id=99 op=LOAD Dec 16 03:14:48.090000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 03:14:48.090000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:14:48.090000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 03:14:48.090000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:14:48.090000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 03:14:48.091000 audit: BPF prog-id=100 op=LOAD Dec 16 03:14:48.091000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2508 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653733393861613964373966373332656336333866633562393765 Dec 16 03:14:48.105135 systemd[1]: Started cri-containerd-b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230.scope - libcontainer container b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230. 
Dec 16 03:14:48.120000 audit: BPF prog-id=101 op=LOAD Dec 16 03:14:48.121000 audit: BPF prog-id=102 op=LOAD Dec 16 03:14:48.121000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.121000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:14:48.121000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.122000 audit: BPF prog-id=103 op=LOAD Dec 16 03:14:48.122000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.122000 audit: BPF prog-id=104 op=LOAD Dec 16 03:14:48.122000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.122000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:14:48.122000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.122000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:14:48.122000 audit[2644]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.122000 audit: BPF prog-id=105 op=LOAD Dec 16 03:14:48.122000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2498 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238616437383162323966613538633833326235396138633939613333 Dec 16 03:14:48.148000 audit: BPF prog-id=106 op=LOAD Dec 16 03:14:48.150000 audit: BPF prog-id=107 op=LOAD Dec 16 03:14:48.150000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2494 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.152000 audit: BPF prog-id=107 op=UNLOAD Dec 16 03:14:48.152000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.152000 audit: BPF prog-id=108 op=LOAD Dec 16 03:14:48.152000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2494 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.152000 audit: BPF prog-id=109 op=LOAD Dec 16 03:14:48.152000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2494 pid=2624 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.152000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:14:48.152000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.153000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:14:48.153000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.153000 audit: BPF prog-id=110 op=LOAD Dec 16 03:14:48.153000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2494 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:48.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430313133653561646265626361326638313366346334373738343235 Dec 16 03:14:48.164461 containerd[1591]: time="2025-12-16T03:14:48.164127917Z" level=info msg="StartContainer for \"63e7398aa9d79f732ec638fc5b97e0ba5ad707f67638c4ffeeabb7d87df29b56\" returns successfully" Dec 16 03:14:48.185172 kubelet[2436]: E1216 03:14:48.184819 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://146.190.151.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-7-1189c174c4&limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:14:48.213350 containerd[1591]: time="2025-12-16T03:14:48.212770523Z" level=info msg="StartContainer for \"b8ad781b29fa58c832b59a8c99a33bd56948b8c2c607151b0742b95640e32230\" returns successfully" Dec 16 03:14:48.248471 kubelet[2436]: E1216 03:14:48.248408 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not 
found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:48.250079 kubelet[2436]: E1216 03:14:48.249861 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:48.252186 containerd[1591]: time="2025-12-16T03:14:48.252145815Z" level=info msg="StartContainer for \"40113e5adbebca2f813f4c4778425f87b590b3cb31e9fce141cb6b636ba46cf3\" returns successfully" Dec 16 03:14:48.257057 kubelet[2436]: E1216 03:14:48.256989 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:48.257354 kubelet[2436]: E1216 03:14:48.257333 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:48.260428 kubelet[2436]: E1216 03:14:48.260184 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:48.260428 kubelet[2436]: E1216 03:14:48.260336 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:48.338631 kubelet[2436]: E1216 03:14:48.338584 2436 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://146.190.151.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.151.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:14:48.783585 kubelet[2436]: I1216 03:14:48.782996 2436 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:49.268859 kubelet[2436]: E1216 03:14:49.268823 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:49.269917 kubelet[2436]: E1216 03:14:49.269018 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:49.269917 kubelet[2436]: E1216 03:14:49.269525 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:49.269917 kubelet[2436]: E1216 03:14:49.269658 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:49.269917 kubelet[2436]: E1216 03:14:49.269703 2436 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:49.269917 kubelet[2436]: E1216 03:14:49.269859 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:50.370924 
kubelet[2436]: E1216 03:14:50.370883 2436 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-7-1189c174c4\" not found" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.401685 kubelet[2436]: E1216 03:14:50.401544 2436 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547.0.0-7-1189c174c4.188193a9fabca57b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-7-1189c174c4,UID:ci-4547.0.0-7-1189c174c4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-7-1189c174c4,},FirstTimestamp:2025-12-16 03:14:47.171368315 +0000 UTC m=+1.139695083,LastTimestamp:2025-12-16 03:14:47.171368315 +0000 UTC m=+1.139695083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-7-1189c174c4,}" Dec 16 03:14:50.470931 kubelet[2436]: I1216 03:14:50.470844 2436 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.501551 kubelet[2436]: I1216 03:14:50.501306 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.505316 kubelet[2436]: E1216 03:14:50.505199 2436 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547.0.0-7-1189c174c4.188193a9fc4f9702 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-7-1189c174c4,UID:ci-4547.0.0-7-1189c174c4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-7-1189c174c4,},FirstTimestamp:2025-12-16 03:14:47.197775618 +0000 UTC m=+1.166102384,LastTimestamp:2025-12-16 03:14:47.197775618 +0000 UTC m=+1.166102384,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-7-1189c174c4,}" Dec 16 03:14:50.532603 kubelet[2436]: E1216 03:14:50.532507 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.532880 kubelet[2436]: I1216 03:14:50.532695 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.541571 kubelet[2436]: E1216 03:14:50.541495 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-7-1189c174c4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.541571 kubelet[2436]: I1216 03:14:50.541541 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:50.547105 kubelet[2436]: E1216 03:14:50.547057 2436 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:51.166422 kubelet[2436]: I1216 03:14:51.166366 2436 apiserver.go:52] "Watching apiserver" Dec 16 03:14:51.196398 kubelet[2436]: I1216 03:14:51.196348 2436 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:14:52.532397 kubelet[2436]: I1216 03:14:52.532216 2436 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:52.548969 kubelet[2436]: I1216 03:14:52.548895 2436 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:14:52.549267 kubelet[2436]: E1216 03:14:52.549242 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:53.006698 systemd[1]: Reload requested from client PID 2718 ('systemctl') (unit session-10.scope)... Dec 16 03:14:53.006835 systemd[1]: Reloading... Dec 16 03:14:53.134847 zram_generator::config[2761]: No configuration found. Dec 16 03:14:53.275552 kubelet[2436]: E1216 03:14:53.275364 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:53.606457 systemd[1]: Reloading finished in 599 ms. Dec 16 03:14:53.645423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:53.662273 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:14:53.663128 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:53.665814 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 03:14:53.665996 kernel: audit: type=1131 audit(1765854893.662:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:53.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:53.669536 systemd[1]: kubelet.service: Consumed 1.586s CPU time, 127.3M memory peak. Dec 16 03:14:53.675321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 03:14:53.675000 audit: BPF prog-id=111 op=LOAD Dec 16 03:14:53.678839 kernel: audit: type=1334 audit(1765854893.675:395): prog-id=111 op=LOAD Dec 16 03:14:53.678000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:14:53.685117 kernel: audit: type=1334 audit(1765854893.678:396): prog-id=78 op=UNLOAD Dec 16 03:14:53.678000 audit: BPF prog-id=112 op=LOAD Dec 16 03:14:53.688841 kernel: audit: type=1334 audit(1765854893.678:397): prog-id=112 op=LOAD Dec 16 03:14:53.678000 audit: BPF prog-id=113 op=LOAD Dec 16 03:14:53.678000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:14:53.692210 kernel: audit: type=1334 audit(1765854893.678:398): prog-id=113 op=LOAD Dec 16 03:14:53.692327 kernel: audit: type=1334 audit(1765854893.678:399): prog-id=79 op=UNLOAD Dec 16 03:14:53.678000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:14:53.694890 kernel: audit: type=1334 audit(1765854893.678:400): prog-id=80 op=UNLOAD Dec 16 03:14:53.695039 kernel: audit: type=1334 audit(1765854893.684:401): prog-id=114 op=LOAD Dec 16 03:14:53.684000 audit: BPF prog-id=114 op=LOAD Dec 16 03:14:53.684000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:14:53.701848 kernel: audit: type=1334 audit(1765854893.684:402): prog-id=68 op=UNLOAD Dec 16 03:14:53.684000 audit: BPF prog-id=115 op=LOAD Dec 16 03:14:53.703871 kernel: audit: type=1334 audit(1765854893.684:403): prog-id=115 op=LOAD Dec 16 03:14:53.685000 audit: BPF prog-id=116 op=LOAD Dec 16 03:14:53.685000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:14:53.685000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:14:53.697000 audit: BPF prog-id=117 op=LOAD Dec 16 03:14:53.697000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:14:53.697000 audit: BPF prog-id=118 op=LOAD Dec 16 03:14:53.697000 audit: BPF prog-id=119 op=LOAD Dec 16 03:14:53.697000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:14:53.697000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:14:53.699000 audit: BPF prog-id=120 op=LOAD Dec 16 03:14:53.699000 audit: BPF prog-id=64 op=UNLOAD Dec 16 03:14:53.699000 audit: BPF prog-id=121 op=LOAD Dec 16 03:14:53.699000 audit: BPF prog-id=122 op=LOAD Dec 16 03:14:53.699000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:14:53.699000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:14:53.702000 audit: BPF prog-id=123 op=LOAD Dec 16 03:14:53.702000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:14:53.702000 audit: BPF prog-id=124 op=LOAD Dec 16 03:14:53.702000 audit: BPF prog-id=125 op=LOAD Dec 16 03:14:53.702000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:14:53.702000 audit: BPF prog-id=63 op=UNLOAD Dec 16 03:14:53.704000 audit: BPF prog-id=126 op=LOAD Dec 16 03:14:53.704000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:14:53.704000 audit: BPF prog-id=127 op=LOAD Dec 16 03:14:53.704000 audit: BPF prog-id=128 op=LOAD Dec 16 03:14:53.705000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:14:53.705000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:14:53.706000 audit: BPF prog-id=129 op=LOAD Dec 16 03:14:53.706000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:14:53.707000 audit: BPF prog-id=130 op=LOAD Dec 16 03:14:53.707000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:14:53.908982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:53.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:53.925357 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:14:54.000066 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:54.000066 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:14:54.000066 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:54.000066 kubelet[2815]: I1216 03:14:53.999870 2815 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:14:54.019836 kubelet[2815]: I1216 03:14:54.018834 2815 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:14:54.019836 kubelet[2815]: I1216 03:14:54.018883 2815 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:14:54.019836 kubelet[2815]: I1216 03:14:54.019241 2815 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:14:54.022974 kubelet[2815]: I1216 03:14:54.022928 2815 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 03:14:54.031904 kubelet[2815]: I1216 03:14:54.031832 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:14:54.039554 kubelet[2815]: I1216 03:14:54.039502 2815 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:14:54.048450 kubelet[2815]: I1216 03:14:54.048389 2815 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:14:54.049000 kubelet[2815]: I1216 03:14:54.048805 2815 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:14:54.049194 kubelet[2815]: I1216 03:14:54.048851 2815 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-7-1189c174c4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:14:54.049194 kubelet[2815]: I1216 03:14:54.049186 2815 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:14:54.049194 kubelet[2815]: I1216 03:14:54.049201 2815 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:14:54.050057 kubelet[2815]: I1216 03:14:54.049285 2815 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:54.050057 kubelet[2815]: I1216 03:14:54.049546 2815 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:14:54.050057 kubelet[2815]: I1216 03:14:54.049573 2815 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:14:54.050057 kubelet[2815]: I1216 03:14:54.049627 2815 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:14:54.050057 kubelet[2815]: I1216 03:14:54.049646 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:14:54.065333 kubelet[2815]: I1216 03:14:54.063777 2815 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:14:54.065333 kubelet[2815]: I1216 03:14:54.064734 2815 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:14:54.069588 kubelet[2815]: I1216 03:14:54.069560 2815 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:14:54.069776 kubelet[2815]: I1216 03:14:54.069766 2815 server.go:1289] "Started kubelet" Dec 16 03:14:54.072743 kubelet[2815]: I1216 03:14:54.072544 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:14:54.084255 kubelet[2815]: I1216 
03:14:54.084209 2815 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:14:54.108669 kubelet[2815]: I1216 03:14:54.108631 2815 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:14:54.110121 kubelet[2815]: I1216 03:14:54.089975 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:14:54.120125 kubelet[2815]: I1216 03:14:54.084734 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:14:54.120125 kubelet[2815]: I1216 03:14:54.120476 2815 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:14:54.120710 kubelet[2815]: E1216 03:14:54.092354 2815 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-7-1189c174c4\" not found" Dec 16 03:14:54.120710 kubelet[2815]: I1216 03:14:54.107135 2815 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:14:54.121493 kubelet[2815]: I1216 03:14:54.121201 2815 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:14:54.121493 kubelet[2815]: I1216 03:14:54.092114 2815 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:14:54.124145 kubelet[2815]: I1216 03:14:54.092157 2815 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:14:54.126435 kubelet[2815]: I1216 03:14:54.124584 2815 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:14:54.133230 kubelet[2815]: I1216 03:14:54.133203 2815 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:14:54.137245 kubelet[2815]: E1216 03:14:54.137216 2815 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:14:54.151306 kubelet[2815]: I1216 03:14:54.150999 2815 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:14:54.163893 kubelet[2815]: I1216 03:14:54.163469 2815 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:14:54.163893 kubelet[2815]: I1216 03:14:54.163523 2815 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:14:54.163893 kubelet[2815]: I1216 03:14:54.163569 2815 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:14:54.163893 kubelet[2815]: I1216 03:14:54.163579 2815 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:14:54.163893 kubelet[2815]: E1216 03:14:54.163655 2815 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:14:54.228579 kubelet[2815]: I1216 03:14:54.228315 2815 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:14:54.228579 kubelet[2815]: I1216 03:14:54.228335 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:14:54.228579 kubelet[2815]: I1216 03:14:54.228359 2815 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:54.228874 kubelet[2815]: I1216 03:14:54.228507 2815 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:14:54.228975 kubelet[2815]: I1216 03:14:54.228941 2815 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:14:54.229328 kubelet[2815]: I1216 03:14:54.229137 2815 policy_none.go:49] "None policy: Start" Dec 16 03:14:54.229328 kubelet[2815]: I1216 03:14:54.229262 2815 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:14:54.229328 kubelet[2815]: I1216 03:14:54.229281 2815 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:14:54.229759 kubelet[2815]: I1216 03:14:54.229654 2815 state_mem.go:75] "Updated machine memory state" Dec 16 03:14:54.244228 kubelet[2815]: E1216 03:14:54.244192 2815 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:14:54.245277 kubelet[2815]: I1216 03:14:54.245250 2815 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:14:54.246066 kubelet[2815]: I1216 03:14:54.245490 2815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:14:54.246669 kubelet[2815]: I1216 03:14:54.246647 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:14:54.249648 kubelet[2815]: E1216 03:14:54.249614 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:14:54.267485 kubelet[2815]: I1216 03:14:54.267353 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.274022 kubelet[2815]: I1216 03:14:54.273854 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.277158 kubelet[2815]: I1216 03:14:54.274304 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.295138 kubelet[2815]: I1216 03:14:54.293418 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:14:54.297846 kubelet[2815]: I1216 03:14:54.297680 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:14:54.303843 kubelet[2815]: I1216 03:14:54.303804 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:14:54.305141 kubelet[2815]: E1216 03:14:54.304941 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.366231 kubelet[2815]: I1216 03:14:54.365407 2815 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.392430 kubelet[2815]: I1216 03:14:54.391190 2815 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.392430 kubelet[2815]: I1216 03:14:54.391321 2815 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426564 kubelet[2815]: I1216 03:14:54.425706 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4442420bdb3cd3973298aa6e3bfba933-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" (UID: \"4442420bdb3cd3973298aa6e3bfba933\") " pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426564 kubelet[2815]: I1216 03:14:54.426006 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b549725374ee46be68981bf071ba358-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-7-1189c174c4\" (UID: \"7b549725374ee46be68981bf071ba358\") " pod="kube-system/kube-scheduler-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426564 kubelet[2815]: I1216 03:14:54.426173 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4442420bdb3cd3973298aa6e3bfba933-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" (UID: \"4442420bdb3cd3973298aa6e3bfba933\") " pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426564 kubelet[2815]: I1216 03:14:54.426302 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4442420bdb3cd3973298aa6e3bfba933-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4547.0.0-7-1189c174c4\" (UID: \"4442420bdb3cd3973298aa6e3bfba933\") " pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426564 kubelet[2815]: I1216 03:14:54.426327 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426876 kubelet[2815]: I1216 03:14:54.426347 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426876 kubelet[2815]: I1216 03:14:54.426473 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426876 kubelet[2815]: I1216 03:14:54.426489 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.426876 kubelet[2815]: I1216 03:14:54.426646 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/854de2fbb8b305147786f19fcce2f987-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-7-1189c174c4\" (UID: \"854de2fbb8b305147786f19fcce2f987\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:54.595270 kubelet[2815]: E1216 03:14:54.595025 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:54.598501 kubelet[2815]: E1216 03:14:54.598400 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:54.606159 kubelet[2815]: E1216 03:14:54.606103 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:55.059583 kubelet[2815]: I1216 03:14:55.059526 2815 apiserver.go:52] "Watching apiserver" Dec 16 03:14:55.125230 kubelet[2815]: I1216 03:14:55.124809 2815 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:14:55.213032 kubelet[2815]: I1216 03:14:55.212981 2815 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:55.213746 
kubelet[2815]: E1216 03:14:55.213664 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:55.214450 kubelet[2815]: E1216 03:14:55.214380 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:55.257679 kubelet[2815]: I1216 03:14:55.257628 2815 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:14:55.257883 kubelet[2815]: E1216 03:14:55.257730 2815 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-7-1189c174c4\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" Dec 16 03:14:55.258120 kubelet[2815]: E1216 03:14:55.257971 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:55.348318 kubelet[2815]: I1216 03:14:55.347759 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-7-1189c174c4" podStartSLOduration=1.347730908 podStartE2EDuration="1.347730908s" podCreationTimestamp="2025-12-16 03:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:55.318396862 +0000 UTC m=+1.384176938" watchObservedRunningTime="2025-12-16 03:14:55.347730908 +0000 UTC m=+1.413511000" Dec 16 03:14:55.366557 kubelet[2815]: I1216 03:14:55.366224 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-7-1189c174c4" podStartSLOduration=3.366205263 podStartE2EDuration="3.366205263s" podCreationTimestamp="2025-12-16 03:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:55.348426473 +0000 UTC m=+1.414206547" watchObservedRunningTime="2025-12-16 03:14:55.366205263 +0000 UTC m=+1.431985341" Dec 16 03:14:55.366557 kubelet[2815]: I1216 03:14:55.366428 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-7-1189c174c4" podStartSLOduration=1.366416434 podStartE2EDuration="1.366416434s" podCreationTimestamp="2025-12-16 03:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:55.3664042 +0000 UTC m=+1.432184255" watchObservedRunningTime="2025-12-16 03:14:55.366416434 +0000 UTC m=+1.432196510" Dec 16 03:14:56.214338 kubelet[2815]: E1216 03:14:56.214294 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:56.215388 kubelet[2815]: E1216 03:14:56.215354 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:57.216418 kubelet[2815]: E1216 03:14:57.216370 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:57.413227 kubelet[2815]: I1216 03:14:57.413180 2815 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:14:57.413755 containerd[1591]: time="2025-12-16T03:14:57.413712978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:14:57.414901 kubelet[2815]: I1216 03:14:57.414555 2815 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:14:58.425151 systemd[1]: Created slice kubepods-besteffort-pod3d18080b_3498_4d23_a082_e934abc67ea7.slice - libcontainer container kubepods-besteffort-pod3d18080b_3498_4d23_a082_e934abc67ea7.slice. Dec 16 03:14:58.455469 kubelet[2815]: I1216 03:14:58.455268 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d18080b-3498-4d23-a082-e934abc67ea7-kube-proxy\") pod \"kube-proxy-vschq\" (UID: \"3d18080b-3498-4d23-a082-e934abc67ea7\") " pod="kube-system/kube-proxy-vschq" Dec 16 03:14:58.456015 kubelet[2815]: I1216 03:14:58.455486 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d18080b-3498-4d23-a082-e934abc67ea7-xtables-lock\") pod \"kube-proxy-vschq\" (UID: \"3d18080b-3498-4d23-a082-e934abc67ea7\") " pod="kube-system/kube-proxy-vschq" Dec 16 03:14:58.456015 kubelet[2815]: I1216 03:14:58.455522 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d18080b-3498-4d23-a082-e934abc67ea7-lib-modules\") pod \"kube-proxy-vschq\" (UID: \"3d18080b-3498-4d23-a082-e934abc67ea7\") " pod="kube-system/kube-proxy-vschq" Dec 16 03:14:58.456015 kubelet[2815]: I1216 03:14:58.455538 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rr9j\" (UniqueName: \"kubernetes.io/projected/3d18080b-3498-4d23-a082-e934abc67ea7-kube-api-access-2rr9j\") pod \"kube-proxy-vschq\" (UID: \"3d18080b-3498-4d23-a082-e934abc67ea7\") " pod="kube-system/kube-proxy-vschq" Dec 16 03:14:58.582729 update_engine[1570]: I20251216 03:14:58.582641 1570 update_attempter.cc:509] Updating boot flags... 
Dec 16 03:14:58.736551 kubelet[2815]: E1216 03:14:58.734117 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:58.737586 containerd[1591]: time="2025-12-16T03:14:58.737451839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vschq,Uid:3d18080b-3498-4d23-a082-e934abc67ea7,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:58.761907 kubelet[2815]: I1216 03:14:58.761599 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce78617b-4122-42d1-98f1-01efa5f15a58-var-lib-calico\") pod \"tigera-operator-7dcd859c48-h5txc\" (UID: \"ce78617b-4122-42d1-98f1-01efa5f15a58\") " pod="tigera-operator/tigera-operator-7dcd859c48-h5txc" Dec 16 03:14:58.761907 kubelet[2815]: I1216 03:14:58.761670 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrgj\" (UniqueName: \"kubernetes.io/projected/ce78617b-4122-42d1-98f1-01efa5f15a58-kube-api-access-pcrgj\") pod \"tigera-operator-7dcd859c48-h5txc\" (UID: \"ce78617b-4122-42d1-98f1-01efa5f15a58\") " pod="tigera-operator/tigera-operator-7dcd859c48-h5txc" Dec 16 03:14:58.793454 systemd[1]: Created slice kubepods-besteffort-podce78617b_4122_42d1_98f1_01efa5f15a58.slice - libcontainer container kubepods-besteffort-podce78617b_4122_42d1_98f1_01efa5f15a58.slice. Dec 16 03:14:58.817551 containerd[1591]: time="2025-12-16T03:14:58.817490405Z" level=info msg="connecting to shim 506a5500df49006918e60185cc32e562f960543c62474204aaabf2e7725e484e" address="unix:///run/containerd/s/1f6be3756857add1fa112b3ff534164e727f6a14d02279c492cc4acc94a670a3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:58.874432 systemd[1]: Started cri-containerd-506a5500df49006918e60185cc32e562f960543c62474204aaabf2e7725e484e.scope - libcontainer container 506a5500df49006918e60185cc32e562f960543c62474204aaabf2e7725e484e. 
Dec 16 03:14:58.951000 audit: BPF prog-id=131 op=LOAD Dec 16 03:14:58.953944 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:14:58.954076 kernel: audit: type=1334 audit(1765854898.951:436): prog-id=131 op=LOAD Dec 16 03:14:58.960425 kernel: audit: type=1334 audit(1765854898.955:437): prog-id=132 op=LOAD Dec 16 03:14:58.960532 kernel: audit: type=1300 audit(1765854898.955:437): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: BPF prog-id=132 op=LOAD Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.955000 audit: BPF prog-id=132 op=UNLOAD Dec 16 03:14:58.969148 kernel: audit: type=1327 audit(1765854898.955:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.969276 kernel: audit: type=1334 audit(1765854898.955:438): prog-id=132 op=UNLOAD Dec 16 03:14:58.969302 kernel: audit: type=1300 audit(1765854898.955:438): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.979814 kernel: audit: type=1327 audit(1765854898.955:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.955000 audit: BPF prog-id=133 op=LOAD Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.986365 
kernel: audit: type=1334 audit(1765854898.955:439): prog-id=133 op=LOAD Dec 16 03:14:58.986426 kernel: audit: type=1300 audit(1765854898.955:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.990523 kernel: audit: type=1327 audit(1765854898.955:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.955000 audit: BPF prog-id=134 op=LOAD Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.955000 audit: BPF prog-id=134 op=UNLOAD Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.955000 audit: BPF prog-id=133 op=UNLOAD Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:58.955000 audit: BPF prog-id=135 op=LOAD Dec 16 03:14:58.955000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.955000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530366135353030646634393030363931386536303138356363333265 Dec 16 03:14:59.001917 containerd[1591]: time="2025-12-16T03:14:59.001815087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vschq,Uid:3d18080b-3498-4d23-a082-e934abc67ea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"506a5500df49006918e60185cc32e562f960543c62474204aaabf2e7725e484e\"" Dec 16 03:14:59.003118 kubelet[2815]: E1216 03:14:59.003059 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:59.010856 containerd[1591]: time="2025-12-16T03:14:59.010776839Z" level=info msg="CreateContainer within sandbox \"506a5500df49006918e60185cc32e562f960543c62474204aaabf2e7725e484e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:14:59.024559 containerd[1591]: time="2025-12-16T03:14:59.023755805Z" level=info msg="Container 67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:59.032314 containerd[1591]: time="2025-12-16T03:14:59.032261268Z" level=info msg="CreateContainer within sandbox \"506a5500df49006918e60185cc32e562f960543c62474204aaabf2e7725e484e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af\"" Dec 16 03:14:59.034635 containerd[1591]: time="2025-12-16T03:14:59.034562948Z" level=info msg="StartContainer for \"67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af\"" Dec 16 03:14:59.037687 containerd[1591]: time="2025-12-16T03:14:59.037633744Z" level=info msg="connecting to shim 67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af" address="unix:///run/containerd/s/1f6be3756857add1fa112b3ff534164e727f6a14d02279c492cc4acc94a670a3" protocol=ttrpc version=3 Dec 16 03:14:59.063074 systemd[1]: Started cri-containerd-67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af.scope - libcontainer container 67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af. 
Dec 16 03:14:59.115220 containerd[1591]: time="2025-12-16T03:14:59.115081220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-h5txc,Uid:ce78617b-4122-42d1-98f1-01efa5f15a58,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:14:59.121000 audit: BPF prog-id=136 op=LOAD Dec 16 03:14:59.121000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2886 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637633131623433306639623937613962643835626537386133663730 Dec 16 03:14:59.121000 audit: BPF prog-id=137 op=LOAD Dec 16 03:14:59.121000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2886 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637633131623433306639623937613962643835626537386133663730 Dec 16 03:14:59.121000 audit: BPF prog-id=137 op=UNLOAD Dec 16 03:14:59.121000 audit[2925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637633131623433306639623937613962643835626537386133663730 Dec 16 03:14:59.121000 audit: BPF prog-id=136 op=UNLOAD Dec 16 03:14:59.121000 audit[2925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637633131623433306639623937613962643835626537386133663730 Dec 16 03:14:59.121000 audit: BPF prog-id=138 op=LOAD Dec 16 03:14:59.121000 audit[2925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2886 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.121000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637633131623433306639623937613962643835626537386133663730 Dec 16 03:14:59.146119 containerd[1591]: time="2025-12-16T03:14:59.146068946Z" level=info msg="connecting to shim 89134a5b648d189bc1b1563a8c01fbe28c0bc8efb399c3b8369d1b13b0e68dcd" address="unix:///run/containerd/s/1212b2041f7602777c022692edb97b4eac2b8fc4636ca9ca3f9d1e2596d92279" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:59.157059 containerd[1591]: time="2025-12-16T03:14:59.156991214Z" level=info msg="StartContainer for \"67c11b430f9b97a9bd85be78a3f702d3f39d313bb2234389fb3b3127d2baf5af\" returns successfully" Dec 16 03:14:59.193158 systemd[1]: Started cri-containerd-89134a5b648d189bc1b1563a8c01fbe28c0bc8efb399c3b8369d1b13b0e68dcd.scope - libcontainer container 89134a5b648d189bc1b1563a8c01fbe28c0bc8efb399c3b8369d1b13b0e68dcd. Dec 16 03:14:59.212000 audit: BPF prog-id=139 op=LOAD Dec 16 03:14:59.213000 audit: BPF prog-id=140 op=LOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.213000 audit: BPF prog-id=140 op=UNLOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.213000 audit: BPF prog-id=141 op=LOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.213000 audit: BPF prog-id=142 op=LOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.213000 audit: BPF prog-id=142 op=UNLOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.213000 audit: BPF prog-id=141 op=UNLOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.213000 audit: BPF prog-id=143 op=LOAD Dec 16 03:14:59.213000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2961 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313334613562363438643138396263316231353633613863303166 Dec 16 03:14:59.227525 kubelet[2815]: E1216 03:14:59.227431 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:14:59.291035 containerd[1591]: time="2025-12-16T03:14:59.289869400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-h5txc,Uid:ce78617b-4122-42d1-98f1-01efa5f15a58,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89134a5b648d189bc1b1563a8c01fbe28c0bc8efb399c3b8369d1b13b0e68dcd\"" Dec 16 03:14:59.295511 containerd[1591]: time="2025-12-16T03:14:59.295058718Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:14:59.298289 systemd-resolved[1273]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Dec 16 03:14:59.478000 audit[3037]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.478000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8f4bdc40 a2=0 a3=7ffd8f4bdc2c items=0 ppid=2938 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:14:59.484000 audit[3040]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.484000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc591d9d60 a2=0 a3=7ffc591d9d4c items=0 ppid=2938 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:14:59.487000 audit[3041]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.487000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd45731160 a2=0 a3=7ffd4573114c items=0 ppid=2938 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:14:59.488000 audit[3042]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.488000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5f446800 a2=0 a3=7fff5f4467ec items=0 ppid=2938 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.488000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:14:59.491000 audit[3043]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.491000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec0f54970 a2=0 a3=7ffec0f5495c items=0 ppid=2938 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:14:59.493000 audit[3044]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 03:14:59.493000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7013cb00 a2=0 a3=7ffc7013caec items=0 ppid=2938 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:14:59.574854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138115187.mount: Deactivated successfully. Dec 16 03:14:59.589000 audit[3045]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.589000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe65fcc190 a2=0 a3=7ffe65fcc17c items=0 ppid=2938 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:14:59.595000 audit[3047]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.595000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff52865340 a2=0 a3=7fff5286532c items=0 ppid=2938 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:14:59.600000 audit[3050]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.600000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc289b4c0 a2=0 a3=7fffc289b4ac items=0 ppid=2938 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:14:59.602000 audit[3051]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.602000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcba094fb0 a2=0 a3=7ffcba094f9c items=0 ppid=2938 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.602000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:14:59.606000 audit[3053]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.606000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5746fd00 a2=0 a3=7fff5746fcec items=0 ppid=2938 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:14:59.607000 audit[3054]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.607000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc16b918d0 a2=0 a3=7ffc16b918bc items=0 ppid=2938 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:14:59.611000 audit[3056]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.611000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0ce0fed0 a2=0 a3=7ffd0ce0febc items=0 ppid=2938 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.611000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:14:59.616000 audit[3059]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.616000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffffbbdc030 a2=0 a3=7ffffbbdc01c items=0 ppid=2938 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:14:59.618000 audit[3060]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.618000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2ade91c0 a2=0 a3=7ffc2ade91ac 
items=0 ppid=2938 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:14:59.621000 audit[3062]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.621000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfe76cef0 a2=0 a3=7ffdfe76cedc items=0 ppid=2938 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:14:59.623000 audit[3063]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.623000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8b310ef0 a2=0 a3=7ffc8b310edc items=0 ppid=2938 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:14:59.627000 audit[3065]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.627000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc090f5230 a2=0 a3=7ffc090f521c items=0 ppid=2938 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:14:59.633000 audit[3068]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.633000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcf089c20 a2=0 a3=7fffcf089c0c items=0 ppid=2938 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:14:59.638000 audit[3071]: NETFILTER_CFG 
table=filter:73 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.638000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6a938480 a2=0 a3=7fff6a93846c items=0 ppid=2938 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:14:59.641000 audit[3072]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.641000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3df82c50 a2=0 a3=7ffe3df82c3c items=0 ppid=2938 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:14:59.649000 audit[3074]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.649000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc0a918010 a2=0 a3=7ffc0a917ffc items=0 ppid=2938 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:59.655000 audit[3077]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.655000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc011b90f0 a2=0 a3=7ffc011b90dc items=0 ppid=2938 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.655000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:59.657000 audit[3078]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.657000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8656df60 a2=0 a3=7fff8656df4c items=0 ppid=2938 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.657000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:14:59.661000 audit[3080]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:59.661000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdd62ca5b0 a2=0 a3=7ffdd62ca59c items=0 ppid=2938 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:14:59.691000 audit[3086]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:59.691000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe83e5ba60 a2=0 a3=7ffe83e5ba4c items=0 ppid=2938 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:59.700000 audit[3086]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:59.700000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe83e5ba60 a2=0 a3=7ffe83e5ba4c items=0 ppid=2938 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:59.703000 audit[3091]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.703000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcaf40e820 a2=0 a3=7ffcaf40e80c items=0 ppid=2938 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:14:59.707000 audit[3093]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.707000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe68bc1de0 a2=0 a3=7ffe68bc1dcc items=0 ppid=2938 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.707000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:14:59.713000 audit[3096]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.713000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc95824760 a2=0 a3=7ffc9582474c items=0 ppid=2938 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:14:59.715000 audit[3097]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.715000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcec0b1eb0 a2=0 a3=7ffcec0b1e9c items=0 ppid=2938 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:14:59.719000 audit[3099]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.719000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd1cc2580 a2=0 a3=7fffd1cc256c items=0 ppid=2938 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:14:59.720000 audit[3100]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.720000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab05cc50 a2=0 a3=7ffeab05cc3c items=0 ppid=2938 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:14:59.724000 audit[3102]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.724000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc84fda020 a2=0 
a3=7ffc84fda00c items=0 ppid=2938 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:14:59.729000 audit[3105]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.729000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc0a0ab640 a2=0 a3=7ffc0a0ab62c items=0 ppid=2938 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:14:59.731000 audit[3106]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.731000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0b93a5e0 a2=0 a3=7ffc0b93a5cc items=0 ppid=2938 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:14:59.735000 audit[3108]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.735000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda037b850 a2=0 a3=7ffda037b83c items=0 ppid=2938 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.735000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:14:59.736000 audit[3109]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.736000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf9055cd0 a2=0 a3=7ffcf9055cbc items=0 ppid=2938 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:14:59.740000 
audit[3111]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.740000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc3392810 a2=0 a3=7ffcc33927fc items=0 ppid=2938 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.740000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:14:59.748000 audit[3114]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.748000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd0a5bcb0 a2=0 a3=7ffdd0a5bc9c items=0 ppid=2938 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:14:59.757000 audit[3117]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.757000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca1f27d00 a2=0 a3=7ffca1f27cec items=0 ppid=2938 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:14:59.760000 audit[3118]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.760000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc08bc9440 a2=0 a3=7ffc08bc942c items=0 ppid=2938 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:14:59.765000 audit[3120]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.765000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcf5fc3a20 a2=0 a3=7ffcf5fc3a0c items=0 ppid=2938 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:59.770000 audit[3123]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.770000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd24d509e0 a2=0 a3=7ffd24d509cc items=0 ppid=2938 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:59.772000 audit[3124]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.772000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda0028fb0 a2=0 a3=7ffda0028f9c items=0 ppid=2938 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:14:59.777000 audit[3126]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.777000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff6f27d700 a2=0 a3=7fff6f27d6ec items=0 ppid=2938 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:14:59.779000 audit[3127]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.779000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff447a920 a2=0 a3=7ffff447a90c items=0 ppid=2938 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:14:59.784000 audit[3129]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.784000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffde2071080 a2=0 a3=7ffde207106c items=0 ppid=2938 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:59.790000 audit[3132]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:59.790000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc28c9f40 a2=0 a3=7ffcc28c9f2c items=0 ppid=2938 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:59.796000 audit[3134]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:14:59.796000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffda9a50c40 a2=0 a3=7ffda9a50c2c items=0 ppid=2938 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.796000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:59.797000 audit[3134]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:14:59.797000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffda9a50c40 a2=0 a3=7ffda9a50c2c items=0 ppid=2938 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.797000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:01.041652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226196491.mount: Deactivated successfully. 
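The NETFILTER_CFG/SYSCALL records above capture each kube-proxy iptables and ip6tables invocation, with the full command line stored in the PROCTITLE field as hex-encoded, NUL-separated argv. A minimal decoding sketch, assuming Python 3; the sample value is copied from one of the iptables records above and decodes to the command that creates the KUBE-FORWARD chain. Note that the kernel truncates long PROCTITLE values, which is why several decoded command lines in these records end mid-argument.

```python
# Decode an audit PROCTITLE field: hex-encoded argv, arguments separated by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv if arg)

# Value copied from one of the audit records above; prints:
# iptables -w 5 -W 100000 -N KUBE-FORWARD -t filter
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D464F5257415244002D740066696C746572")
print(decode_proctitle(sample))
```

The same decoding applies to the ip6tables and iptables-restore PROCTITLE values that follow.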
Dec 16 03:15:03.082655 containerd[1591]: time="2025-12-16T03:15:03.081826274Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:03.084529 containerd[1591]: time="2025-12-16T03:15:03.084472514Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:15:03.085718 containerd[1591]: time="2025-12-16T03:15:03.085667582Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:03.089510 containerd[1591]: time="2025-12-16T03:15:03.089413189Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:03.091678 containerd[1591]: time="2025-12-16T03:15:03.091518384Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.795848658s" Dec 16 03:15:03.091678 containerd[1591]: time="2025-12-16T03:15:03.091569727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:15:03.098837 containerd[1591]: time="2025-12-16T03:15:03.098739787Z" level=info msg="CreateContainer within sandbox \"89134a5b648d189bc1b1563a8c01fbe28c0bc8efb399c3b8369d1b13b0e68dcd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:15:03.119530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723615371.mount: Deactivated successfully. Dec 16 03:15:03.124828 containerd[1591]: time="2025-12-16T03:15:03.122654753Z" level=info msg="Container efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:03.136835 containerd[1591]: time="2025-12-16T03:15:03.136751459Z" level=info msg="CreateContainer within sandbox \"89134a5b648d189bc1b1563a8c01fbe28c0bc8efb399c3b8369d1b13b0e68dcd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d\"" Dec 16 03:15:03.138386 containerd[1591]: time="2025-12-16T03:15:03.138313442Z" level=info msg="StartContainer for \"efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d\"" Dec 16 03:15:03.140207 containerd[1591]: time="2025-12-16T03:15:03.140133715Z" level=info msg="connecting to shim efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d" address="unix:///run/containerd/s/1212b2041f7602777c022692edb97b4eac2b8fc4636ca9ca3f9d1e2596d92279" protocol=ttrpc version=3 Dec 16 03:15:03.185375 systemd[1]: Started cri-containerd-efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d.scope - libcontainer container efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d. 
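The containerd entries above (image pull of quay.io/tigera/operator:v1.38.7 through StartContainer) use logfmt-style fields: time=, level=, and msg= with escaped quotes inside msg. A rough extraction sketch, assuming Python 3; the regular expression is modeled on the lines shown above rather than on any formal containerd log specification.

```python
import re

# Match the time/level/msg fields of a containerd log line, allowing
# escaped quotes (\") inside the msg value, as in the entries above.
CONTAINERD_LINE = re.compile(r'time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"')

# Sample copied (abridged) from a line above.
line = ('time="2025-12-16T03:15:03.091518384Z" level=info '
        'msg="PullImage \\"quay.io/tigera/operator:v1.38.7\\" returns image reference '
        '\\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\\""')

m = CONTAINERD_LINE.search(line)
if m:
    ts, level, msg = m.groups()
    print(ts, level, msg.replace('\\"', '"'))
```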
Dec 16 03:15:03.209000 audit: BPF prog-id=144 op=LOAD Dec 16 03:15:03.210000 audit: BPF prog-id=145 op=LOAD Dec 16 03:15:03.210000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.210000 audit: BPF prog-id=145 op=UNLOAD Dec 16 03:15:03.210000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.210000 audit: BPF prog-id=146 op=LOAD Dec 16 03:15:03.210000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.210000 audit: BPF prog-id=147 op=LOAD Dec 16 03:15:03.210000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.211000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:15:03.211000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.211000 audit: BPF prog-id=146 op=UNLOAD Dec 16 03:15:03.211000 audit[3143]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.211000 audit: BPF prog-id=148 op=LOAD Dec 16 03:15:03.211000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2961 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613133313830616632366464323939323365333837666661326335 Dec 16 03:15:03.256614 containerd[1591]: time="2025-12-16T03:15:03.256538051Z" level=info msg="StartContainer for \"efa13180af26dd29923e387ffa2c5191a37dd49b791e0d20939e78ca11ab530d\" returns successfully" Dec 16 03:15:03.492832 kubelet[2815]: E1216 03:15:03.492308 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:03.534003 kubelet[2815]: I1216 03:15:03.533918 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vschq" podStartSLOduration=5.52695559 podStartE2EDuration="5.52695559s" podCreationTimestamp="2025-12-16 03:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:59.251521868 +0000 UTC m=+5.317301936" watchObservedRunningTime="2025-12-16 03:15:03.52695559 +0000 UTC m=+9.592735666" Dec 16 03:15:04.178596 kubelet[2815]: E1216 03:15:04.178553 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:04.264107 kubelet[2815]: E1216 03:15:04.263758 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:04.266528 kubelet[2815]: E1216 03:15:04.265747 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:04.280968 kubelet[2815]: I1216 03:15:04.280886 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-h5txc" podStartSLOduration=2.481339249 podStartE2EDuration="6.280863496s" podCreationTimestamp="2025-12-16 03:14:58 +0000 UTC" firstStartedPulling="2025-12-16 03:14:59.294091853 +0000 UTC m=+5.359871909" lastFinishedPulling="2025-12-16 03:15:03.093616088 +0000 UTC m=+9.159396156" observedRunningTime="2025-12-16 03:15:04.278264312 +0000 UTC m=+10.344044377" 
watchObservedRunningTime="2025-12-16 03:15:04.280863496 +0000 UTC m=+10.346643572" Dec 16 03:15:05.343277 kubelet[2815]: E1216 03:15:05.342070 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:06.271878 kubelet[2815]: E1216 03:15:06.271411 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:09.187583 sudo[1855]: pam_unix(sudo:session): session closed for user root Dec 16 03:15:09.192083 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:15:09.192220 kernel: audit: type=1106 audit(1765854909.186:516): pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:15:09.186000 audit[1855]: USER_END pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:15:09.192346 sshd[1854]: Connection closed by 147.75.109.163 port 41430 Dec 16 03:15:09.187000 audit[1855]: CRED_DISP pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:15:09.193739 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:09.195942 kernel: audit: type=1104 audit(1765854909.187:517): pid=1855 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:15:09.198000 audit[1850]: USER_END pid=1850 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:09.206218 kernel: audit: type=1106 audit(1765854909.198:518): pid=1850 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:09.205069 systemd[1]: sshd@8-146.190.151.166:22-147.75.109.163:41430.service: Deactivated successfully. Dec 16 03:15:09.208366 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:15:09.209219 systemd[1]: session-10.scope: Consumed 5.286s CPU time, 154.2M memory peak. Dec 16 03:15:09.211266 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. 
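The pod_startup_latency_tracker entry above for tigera-operator reports firstStartedPulling and lastFinishedPulling timestamps; the gap between them is the image-pull window, which lines up with the roughly 3.8 s pull time containerd logged a few entries earlier. A small sketch for turning those kubelet timestamps into a duration, assuming Python 3; parse_kubelet_ts is only an illustrative helper, not a kubelet API, and the timestamp values are copied from the entry above.

```python
from datetime import datetime, timezone

def parse_kubelet_ts(value: str) -> datetime:
    # Illustrative helper: "2025-12-16 03:14:59.294091853 +0000 UTC" -> aware datetime.
    # Drop the zone suffix and truncate nanoseconds to the microseconds datetime supports.
    stamp = value.replace(" +0000 UTC", "")
    date_part, frac = stamp.split(".")
    parsed = datetime.strptime(f"{date_part}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
    return parsed.replace(tzinfo=timezone.utc)

started = parse_kubelet_ts("2025-12-16 03:14:59.294091853 +0000 UTC")
finished = parse_kubelet_ts("2025-12-16 03:15:03.093616088 +0000 UTC")
print((finished - started).total_seconds())  # ~3.7995 s, close to containerd's 3.795848658s
```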
Dec 16 03:15:09.198000 audit[1850]: CRED_DISP pid=1850 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:09.218907 kernel: audit: type=1104 audit(1765854909.198:519): pid=1850 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:09.221670 systemd-logind[1566]: Removed session 10. Dec 16 03:15:09.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-146.190.151.166:22-147.75.109.163:41430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:09.227820 kernel: audit: type=1131 audit(1765854909.204:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-146.190.151.166:22-147.75.109.163:41430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:10.043000 audit[3225]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:10.047814 kernel: audit: type=1325 audit(1765854910.043:521): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:10.043000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe3b03e9b0 a2=0 a3=7ffe3b03e99c items=0 ppid=2938 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:10.054315 kernel: audit: type=1300 audit(1765854910.043:521): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe3b03e9b0 a2=0 a3=7ffe3b03e99c items=0 ppid=2938 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:10.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:10.058822 kernel: audit: type=1327 audit(1765854910.043:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:10.062856 kernel: audit: type=1325 audit(1765854910.050:522): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:10.050000 audit[3225]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:10.050000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3b03e9b0 a2=0 a3=0 items=0 ppid=2938 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:10.069819 kernel: audit: type=1300 audit(1765854910.050:522): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3b03e9b0 
a2=0 a3=0 items=0 ppid=2938 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:10.050000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:10.218000 audit[3227]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:10.218000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd5cbb4090 a2=0 a3=7ffd5cbb407c items=0 ppid=2938 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:10.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:10.233000 audit[3227]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:10.233000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd5cbb4090 a2=0 a3=0 items=0 ppid=2938 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:10.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:13.320000 audit[3231]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:13.320000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff11db41d0 a2=0 a3=7fff11db41bc items=0 ppid=2938 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:13.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:13.324000 audit[3231]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:13.324000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff11db41d0 a2=0 a3=0 items=0 ppid=2938 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:13.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:13.364000 audit[3233]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:13.364000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe6f312420 a2=0 a3=7ffe6f31240c items=0 ppid=2938 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:13.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:13.368000 audit[3233]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:13.368000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe6f312420 a2=0 a3=0 items=0 ppid=2938 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:13.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:14.443606 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 03:15:14.443758 kernel: audit: type=1325 audit(1765854914.437:529): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:14.437000 audit[3237]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:14.437000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc8d82a00 a2=0 a3=7ffcc8d829ec items=0 ppid=2938 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:14.449925 kernel: audit: type=1300 audit(1765854914.437:529): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc8d82a00 a2=0 a3=7ffcc8d829ec items=0 ppid=2938 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:14.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:14.454821 kernel: audit: type=1327 audit(1765854914.437:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:14.454000 audit[3237]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:14.459816 kernel: audit: type=1325 audit(1765854914.454:530): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:14.454000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc8d82a00 a2=0 a3=0 items=0 ppid=2938 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:14.464808 kernel: audit: type=1300 audit(1765854914.454:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc8d82a00 a2=0 a3=0 items=0 ppid=2938 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:14.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:14.473803 kernel: audit: type=1327 audit(1765854914.454:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:15.660000 audit[3239]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:15.663832 kernel: audit: type=1325 audit(1765854915.660:531): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:15.660000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffecfb51e30 a2=0 a3=7ffecfb51e1c items=0 ppid=2938 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:15.668819 kernel: audit: type=1300 audit(1765854915.660:531): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffecfb51e30 a2=0 a3=7ffecfb51e1c items=0 ppid=2938 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:15.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:15.677860 kernel: audit: type=1327 audit(1765854915.660:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:15.678021 kernel: audit: type=1325 audit(1765854915.668:532): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:15.668000 audit[3239]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:15.668000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecfb51e30 a2=0 a3=0 items=0 ppid=2938 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:15.668000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:15.736753 systemd[1]: Created slice kubepods-besteffort-pod91c000ba_0b01_4745_8687_963bd800c148.slice - libcontainer container kubepods-besteffort-pod91c000ba_0b01_4745_8687_963bd800c148.slice. 
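The NETFILTER_CFG records above show kube-proxy's periodic iptables-restore runs, with the entries= count for the filter table growing from 15 to 21 rules while the nat table holds steady at 12. A rough sketch for pulling those counts out of an audit log, assuming Python 3; the field names (table=, family=, entries=, op=) are taken from the records themselves.

```python
import re

# Extract table name, address family, rule count and operation from
# NETFILTER_CFG audit records, e.g. "table=filter:115 family=2 entries=21 op=nft_register_rule".
NETFILTER_CFG = re.compile(r"table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")

def netfilter_changes(lines):
    for line in lines:
        if "NETFILTER_CFG" not in line:
            continue
        m = NETFILTER_CFG.search(line)
        if m:
            table, family, entries, op = m.groups()
            yield table, int(family), int(entries), op

# Example record copied (abridged) from above.
record = ('audit[3239]: NETFILTER_CFG table=filter:115 family=2 entries=21 '
          'op=nft_register_rule pid=3239 comm="iptables-restor"')
print(list(netfilter_changes([record])))  # [('filter', 2, 21, 'nft_register_rule')]
```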
Dec 16 03:15:15.789967 kubelet[2815]: I1216 03:15:15.789771 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91c000ba-0b01-4745-8687-963bd800c148-tigera-ca-bundle\") pod \"calico-typha-5f698866d7-w4dxm\" (UID: \"91c000ba-0b01-4745-8687-963bd800c148\") " pod="calico-system/calico-typha-5f698866d7-w4dxm" Dec 16 03:15:15.790982 kubelet[2815]: I1216 03:15:15.789947 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91c000ba-0b01-4745-8687-963bd800c148-typha-certs\") pod \"calico-typha-5f698866d7-w4dxm\" (UID: \"91c000ba-0b01-4745-8687-963bd800c148\") " pod="calico-system/calico-typha-5f698866d7-w4dxm" Dec 16 03:15:15.790982 kubelet[2815]: I1216 03:15:15.790913 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4nw\" (UniqueName: \"kubernetes.io/projected/91c000ba-0b01-4745-8687-963bd800c148-kube-api-access-6v4nw\") pod \"calico-typha-5f698866d7-w4dxm\" (UID: \"91c000ba-0b01-4745-8687-963bd800c148\") " pod="calico-system/calico-typha-5f698866d7-w4dxm" Dec 16 03:15:16.030394 systemd[1]: Created slice kubepods-besteffort-pod9b0bfa67_300d_4e4a_950f_06115edcd6a3.slice - libcontainer container kubepods-besteffort-pod9b0bfa67_300d_4e4a_950f_06115edcd6a3.slice. Dec 16 03:15:16.041478 kubelet[2815]: E1216 03:15:16.040776 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:16.042628 containerd[1591]: time="2025-12-16T03:15:16.042576970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f698866d7-w4dxm,Uid:91c000ba-0b01-4745-8687-963bd800c148,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:16.076968 containerd[1591]: time="2025-12-16T03:15:16.076832257Z" level=info msg="connecting to shim d52ef8f803db7a96b5f2c9cf2478b5d9bdaa0fed90e865593c296a0d9b39382b" address="unix:///run/containerd/s/f5856155222aca4949d43940335a062b33ee897208dfb024f57dfaeafd9c68cc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:16.095422 kubelet[2815]: I1216 03:15:16.095364 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-xtables-lock\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.096840 kubelet[2815]: I1216 03:15:16.096034 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-var-run-calico\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.097330 kubelet[2815]: I1216 03:15:16.097123 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-cni-net-dir\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.097706 kubelet[2815]: I1216 03:15:16.097574 2815 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-cni-bin-dir\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.098230 kubelet[2815]: I1216 03:15:16.097992 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpg6x\" (UniqueName: \"kubernetes.io/projected/9b0bfa67-300d-4e4a-950f-06115edcd6a3-kube-api-access-cpg6x\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.099867 kubelet[2815]: I1216 03:15:16.098409 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-var-lib-calico\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.101716 kubelet[2815]: I1216 03:15:16.100013 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-cni-log-dir\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.101716 kubelet[2815]: I1216 03:15:16.101607 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-flexvol-driver-host\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.101716 kubelet[2815]: I1216 03:15:16.101649 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-lib-modules\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.101716 kubelet[2815]: I1216 03:15:16.101666 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0bfa67-300d-4e4a-950f-06115edcd6a3-tigera-ca-bundle\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.101716 kubelet[2815]: I1216 03:15:16.101694 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b0bfa67-300d-4e4a-950f-06115edcd6a3-node-certs\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.102015 kubelet[2815]: I1216 03:15:16.101819 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b0bfa67-300d-4e4a-950f-06115edcd6a3-policysync\") pod \"calico-node-n9gwf\" (UID: \"9b0bfa67-300d-4e4a-950f-06115edcd6a3\") " pod="calico-system/calico-node-n9gwf" Dec 16 03:15:16.125295 systemd[1]: Started cri-containerd-d52ef8f803db7a96b5f2c9cf2478b5d9bdaa0fed90e865593c296a0d9b39382b.scope - libcontainer container 
d52ef8f803db7a96b5f2c9cf2478b5d9bdaa0fed90e865593c296a0d9b39382b. Dec 16 03:15:16.168478 kubelet[2815]: E1216 03:15:16.167599 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:16.180000 audit: BPF prog-id=149 op=LOAD Dec 16 03:15:16.184000 audit: BPF prog-id=150 op=LOAD Dec 16 03:15:16.184000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.184000 audit: BPF prog-id=150 op=UNLOAD Dec 16 03:15:16.184000 audit[3262]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.186000 audit: BPF prog-id=151 op=LOAD Dec 16 03:15:16.186000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.187000 audit: BPF prog-id=152 op=LOAD Dec 16 03:15:16.187000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.188000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:15:16.188000 audit[3262]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.188000 audit: BPF prog-id=151 op=UNLOAD Dec 16 03:15:16.188000 audit[3262]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.188000 audit: BPF prog-id=153 op=LOAD Dec 16 03:15:16.188000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3251 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435326566386638303364623761393662356632633963663234373862 Dec 16 03:15:16.203534 kubelet[2815]: I1216 03:15:16.203055 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d2d2c-6593-4c7a-9cdf-35214b834c16-registration-dir\") pod \"csi-node-driver-jz64m\" (UID: \"2f0d2d2c-6593-4c7a-9cdf-35214b834c16\") " pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:16.203534 kubelet[2815]: I1216 03:15:16.203265 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d2d2c-6593-4c7a-9cdf-35214b834c16-kubelet-dir\") pod \"csi-node-driver-jz64m\" (UID: \"2f0d2d2c-6593-4c7a-9cdf-35214b834c16\") " pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:16.203534 kubelet[2815]: I1216 03:15:16.203308 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2f0d2d2c-6593-4c7a-9cdf-35214b834c16-varrun\") pod \"csi-node-driver-jz64m\" (UID: \"2f0d2d2c-6593-4c7a-9cdf-35214b834c16\") " pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:16.203945 kubelet[2815]: I1216 03:15:16.203840 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2f0d2d2c-6593-4c7a-9cdf-35214b834c16-socket-dir\") pod \"csi-node-driver-jz64m\" (UID: \"2f0d2d2c-6593-4c7a-9cdf-35214b834c16\") " pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:16.204277 kubelet[2815]: I1216 03:15:16.204216 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59mz2\" (UniqueName: \"kubernetes.io/projected/2f0d2d2c-6593-4c7a-9cdf-35214b834c16-kube-api-access-59mz2\") 
pod \"csi-node-driver-jz64m\" (UID: \"2f0d2d2c-6593-4c7a-9cdf-35214b834c16\") " pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:16.213716 kubelet[2815]: E1216 03:15:16.213677 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.213716 kubelet[2815]: W1216 03:15:16.213707 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.213913 kubelet[2815]: E1216 03:15:16.213736 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.214894 kubelet[2815]: E1216 03:15:16.214868 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.214894 kubelet[2815]: W1216 03:15:16.214889 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.215182 kubelet[2815]: E1216 03:15:16.214908 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.216537 kubelet[2815]: E1216 03:15:16.216515 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.216537 kubelet[2815]: W1216 03:15:16.216535 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.216693 kubelet[2815]: E1216 03:15:16.216553 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.216749 kubelet[2815]: E1216 03:15:16.216729 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.216749 kubelet[2815]: W1216 03:15:16.216736 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.216749 kubelet[2815]: E1216 03:15:16.216747 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.216933 kubelet[2815]: E1216 03:15:16.216921 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.216933 kubelet[2815]: W1216 03:15:16.216930 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.217032 kubelet[2815]: E1216 03:15:16.216940 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.217089 kubelet[2815]: E1216 03:15:16.217078 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.217089 kubelet[2815]: W1216 03:15:16.217087 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.217219 kubelet[2815]: E1216 03:15:16.217095 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.217282 kubelet[2815]: E1216 03:15:16.217267 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.217282 kubelet[2815]: W1216 03:15:16.217277 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.217388 kubelet[2815]: E1216 03:15:16.217286 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.217897 kubelet[2815]: E1216 03:15:16.217424 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.217897 kubelet[2815]: W1216 03:15:16.217430 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.217897 kubelet[2815]: E1216 03:15:16.217437 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.217897 kubelet[2815]: E1216 03:15:16.217580 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.217897 kubelet[2815]: W1216 03:15:16.217586 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.217897 kubelet[2815]: E1216 03:15:16.217595 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.217897 kubelet[2815]: E1216 03:15:16.217745 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.217897 kubelet[2815]: W1216 03:15:16.217753 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.217897 kubelet[2815]: E1216 03:15:16.217762 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.218175 kubelet[2815]: E1216 03:15:16.217927 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.218175 kubelet[2815]: W1216 03:15:16.217933 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.218175 kubelet[2815]: E1216 03:15:16.217941 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.218175 kubelet[2815]: E1216 03:15:16.218083 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.218175 kubelet[2815]: W1216 03:15:16.218090 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.218175 kubelet[2815]: E1216 03:15:16.218098 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.218339 kubelet[2815]: E1216 03:15:16.218239 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.218339 kubelet[2815]: W1216 03:15:16.218245 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.218339 kubelet[2815]: E1216 03:15:16.218252 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218386 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.218889 kubelet[2815]: W1216 03:15:16.218392 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218399 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218528 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.218889 kubelet[2815]: W1216 03:15:16.218535 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218541 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218653 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.218889 kubelet[2815]: W1216 03:15:16.218659 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218666 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.218889 kubelet[2815]: E1216 03:15:16.218794 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.219127 kubelet[2815]: W1216 03:15:16.218800 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.219127 kubelet[2815]: E1216 03:15:16.218809 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.219127 kubelet[2815]: E1216 03:15:16.218937 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.219127 kubelet[2815]: W1216 03:15:16.218943 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.219127 kubelet[2815]: E1216 03:15:16.218950 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.219127 kubelet[2815]: E1216 03:15:16.219068 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.219127 kubelet[2815]: W1216 03:15:16.219074 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.219127 kubelet[2815]: E1216 03:15:16.219080 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.219334 kubelet[2815]: E1216 03:15:16.219194 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.219334 kubelet[2815]: W1216 03:15:16.219200 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.219334 kubelet[2815]: E1216 03:15:16.219207 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.219427 kubelet[2815]: E1216 03:15:16.219349 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.219427 kubelet[2815]: W1216 03:15:16.219357 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.219427 kubelet[2815]: E1216 03:15:16.219366 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.221044 kubelet[2815]: E1216 03:15:16.219553 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.221044 kubelet[2815]: W1216 03:15:16.219566 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.221044 kubelet[2815]: E1216 03:15:16.219577 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.221483 kubelet[2815]: E1216 03:15:16.221185 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.221483 kubelet[2815]: W1216 03:15:16.221203 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.221483 kubelet[2815]: E1216 03:15:16.221222 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.221743 kubelet[2815]: E1216 03:15:16.221585 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.221743 kubelet[2815]: W1216 03:15:16.221596 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.221743 kubelet[2815]: E1216 03:15:16.221607 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.221898 kubelet[2815]: E1216 03:15:16.221882 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.221926 kubelet[2815]: W1216 03:15:16.221899 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.221926 kubelet[2815]: E1216 03:15:16.221917 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.222712 kubelet[2815]: E1216 03:15:16.222196 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.222712 kubelet[2815]: W1216 03:15:16.222210 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.222712 kubelet[2815]: E1216 03:15:16.222222 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.223805 kubelet[2815]: E1216 03:15:16.222967 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.223914 kubelet[2815]: W1216 03:15:16.223892 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.223999 kubelet[2815]: E1216 03:15:16.223978 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.224491 kubelet[2815]: E1216 03:15:16.224462 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.224646 kubelet[2815]: W1216 03:15:16.224627 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.224726 kubelet[2815]: E1216 03:15:16.224716 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.227526 kubelet[2815]: E1216 03:15:16.227046 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.227646 kubelet[2815]: W1216 03:15:16.227627 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.227706 kubelet[2815]: E1216 03:15:16.227696 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.230622 kubelet[2815]: E1216 03:15:16.230597 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.230762 kubelet[2815]: W1216 03:15:16.230748 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.230865 kubelet[2815]: E1216 03:15:16.230853 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.231769 kubelet[2815]: E1216 03:15:16.231718 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.231769 kubelet[2815]: W1216 03:15:16.231735 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.231769 kubelet[2815]: E1216 03:15:16.231751 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.233477 kubelet[2815]: E1216 03:15:16.233242 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.233477 kubelet[2815]: W1216 03:15:16.233340 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.233477 kubelet[2815]: E1216 03:15:16.233357 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.234174 kubelet[2815]: E1216 03:15:16.233754 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.234174 kubelet[2815]: W1216 03:15:16.233765 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.234174 kubelet[2815]: E1216 03:15:16.233778 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.235945 kubelet[2815]: E1216 03:15:16.234847 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.235945 kubelet[2815]: W1216 03:15:16.234902 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.235945 kubelet[2815]: E1216 03:15:16.234919 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.236386 kubelet[2815]: E1216 03:15:16.236303 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.236386 kubelet[2815]: W1216 03:15:16.236318 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.236386 kubelet[2815]: E1216 03:15:16.236331 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.236722 kubelet[2815]: E1216 03:15:16.236655 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.236722 kubelet[2815]: W1216 03:15:16.236666 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.236722 kubelet[2815]: E1216 03:15:16.236678 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.237444 kubelet[2815]: E1216 03:15:16.237409 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.237444 kubelet[2815]: W1216 03:15:16.237421 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.237444 kubelet[2815]: E1216 03:15:16.237432 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.237960 kubelet[2815]: E1216 03:15:16.237945 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.238142 kubelet[2815]: W1216 03:15:16.238020 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.238142 kubelet[2815]: E1216 03:15:16.238039 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.238381 kubelet[2815]: E1216 03:15:16.238318 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.238559 kubelet[2815]: W1216 03:15:16.238457 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.238647 kubelet[2815]: E1216 03:15:16.238634 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.239305 kubelet[2815]: E1216 03:15:16.239119 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.239305 kubelet[2815]: W1216 03:15:16.239132 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.239305 kubelet[2815]: E1216 03:15:16.239144 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.239567 kubelet[2815]: E1216 03:15:16.239554 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.239738 kubelet[2815]: W1216 03:15:16.239719 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.239958 kubelet[2815]: E1216 03:15:16.239913 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.240644 kubelet[2815]: E1216 03:15:16.240552 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.241758 kubelet[2815]: W1216 03:15:16.240721 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.241758 kubelet[2815]: E1216 03:15:16.240739 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.242160 kubelet[2815]: E1216 03:15:16.242142 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.242427 kubelet[2815]: W1216 03:15:16.242237 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.242427 kubelet[2815]: E1216 03:15:16.242260 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.242655 kubelet[2815]: E1216 03:15:16.242642 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.242711 kubelet[2815]: W1216 03:15:16.242702 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.242761 kubelet[2815]: E1216 03:15:16.242753 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.243109 kubelet[2815]: E1216 03:15:16.243096 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.243177 kubelet[2815]: W1216 03:15:16.243168 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.243306 kubelet[2815]: E1216 03:15:16.243232 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.243656 kubelet[2815]: E1216 03:15:16.243639 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.243976 kubelet[2815]: W1216 03:15:16.243738 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.243976 kubelet[2815]: E1216 03:15:16.243758 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.244988 kubelet[2815]: E1216 03:15:16.244886 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.245250 kubelet[2815]: W1216 03:15:16.245232 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.245438 kubelet[2815]: E1216 03:15:16.245422 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.264108 kubelet[2815]: E1216 03:15:16.264010 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.264108 kubelet[2815]: W1216 03:15:16.264034 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.264108 kubelet[2815]: E1216 03:15:16.264056 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.277739 containerd[1591]: time="2025-12-16T03:15:16.277616209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f698866d7-w4dxm,Uid:91c000ba-0b01-4745-8687-963bd800c148,Namespace:calico-system,Attempt:0,} returns sandbox id \"d52ef8f803db7a96b5f2c9cf2478b5d9bdaa0fed90e865593c296a0d9b39382b\"" Dec 16 03:15:16.280654 kubelet[2815]: E1216 03:15:16.280546 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:16.283124 containerd[1591]: time="2025-12-16T03:15:16.283046097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:15:16.305238 kubelet[2815]: E1216 03:15:16.305202 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.305511 kubelet[2815]: W1216 03:15:16.305379 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.305511 kubelet[2815]: E1216 03:15:16.305411 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.305997 kubelet[2815]: E1216 03:15:16.305935 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.305997 kubelet[2815]: W1216 03:15:16.305951 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.306432 kubelet[2815]: E1216 03:15:16.306149 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.306851 kubelet[2815]: E1216 03:15:16.306824 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.307001 kubelet[2815]: W1216 03:15:16.306839 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.307001 kubelet[2815]: E1216 03:15:16.306944 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.307484 kubelet[2815]: E1216 03:15:16.307460 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.307794 kubelet[2815]: W1216 03:15:16.307658 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.307794 kubelet[2815]: E1216 03:15:16.307675 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.308859 kubelet[2815]: E1216 03:15:16.308806 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.308859 kubelet[2815]: W1216 03:15:16.308823 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.309140 kubelet[2815]: E1216 03:15:16.308840 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.309363 kubelet[2815]: E1216 03:15:16.309316 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.309363 kubelet[2815]: W1216 03:15:16.309327 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.309363 kubelet[2815]: E1216 03:15:16.309339 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.309953 kubelet[2815]: E1216 03:15:16.309840 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.309953 kubelet[2815]: W1216 03:15:16.309871 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.309953 kubelet[2815]: E1216 03:15:16.309887 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.310287 kubelet[2815]: E1216 03:15:16.310246 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.310287 kubelet[2815]: W1216 03:15:16.310257 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.310287 kubelet[2815]: E1216 03:15:16.310268 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.310632 kubelet[2815]: E1216 03:15:16.310592 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.310632 kubelet[2815]: W1216 03:15:16.310603 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.310632 kubelet[2815]: E1216 03:15:16.310613 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.311141 kubelet[2815]: E1216 03:15:16.311121 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.311193 kubelet[2815]: W1216 03:15:16.311141 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.311193 kubelet[2815]: E1216 03:15:16.311159 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.312927 kubelet[2815]: E1216 03:15:16.312896 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.312927 kubelet[2815]: W1216 03:15:16.312924 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.313009 kubelet[2815]: E1216 03:15:16.312945 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.313374 kubelet[2815]: E1216 03:15:16.313259 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.313374 kubelet[2815]: W1216 03:15:16.313272 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.313374 kubelet[2815]: E1216 03:15:16.313284 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.313695 kubelet[2815]: E1216 03:15:16.313602 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.313695 kubelet[2815]: W1216 03:15:16.313613 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.313695 kubelet[2815]: E1216 03:15:16.313624 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.314911 kubelet[2815]: E1216 03:15:16.314895 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.315012 kubelet[2815]: W1216 03:15:16.314971 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.315012 kubelet[2815]: E1216 03:15:16.314989 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.315490 kubelet[2815]: E1216 03:15:16.315353 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.315490 kubelet[2815]: W1216 03:15:16.315378 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.315490 kubelet[2815]: E1216 03:15:16.315389 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.315796 kubelet[2815]: E1216 03:15:16.315776 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.315862 kubelet[2815]: W1216 03:15:16.315853 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.315921 kubelet[2815]: E1216 03:15:16.315912 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.316191 kubelet[2815]: E1216 03:15:16.316111 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.316191 kubelet[2815]: W1216 03:15:16.316121 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.316191 kubelet[2815]: E1216 03:15:16.316131 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.316412 kubelet[2815]: E1216 03:15:16.316402 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.316535 kubelet[2815]: W1216 03:15:16.316453 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.316535 kubelet[2815]: E1216 03:15:16.316467 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.316709 kubelet[2815]: E1216 03:15:16.316699 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.316756 kubelet[2815]: W1216 03:15:16.316748 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.316884 kubelet[2815]: E1216 03:15:16.316832 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.317279 kubelet[2815]: E1216 03:15:16.317198 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.317279 kubelet[2815]: W1216 03:15:16.317210 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.317279 kubelet[2815]: E1216 03:15:16.317220 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.317779 kubelet[2815]: E1216 03:15:16.317644 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.317779 kubelet[2815]: W1216 03:15:16.317655 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.317779 kubelet[2815]: E1216 03:15:16.317666 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.318219 kubelet[2815]: E1216 03:15:16.318206 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.318384 kubelet[2815]: W1216 03:15:16.318276 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.318384 kubelet[2815]: E1216 03:15:16.318290 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.318666 kubelet[2815]: E1216 03:15:16.318655 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.318723 kubelet[2815]: W1216 03:15:16.318714 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.318768 kubelet[2815]: E1216 03:15:16.318760 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.319310 kubelet[2815]: E1216 03:15:16.319293 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.319466 kubelet[2815]: W1216 03:15:16.319370 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.319466 kubelet[2815]: E1216 03:15:16.319384 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.319868 kubelet[2815]: E1216 03:15:16.319818 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.319868 kubelet[2815]: W1216 03:15:16.319829 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.319868 kubelet[2815]: E1216 03:15:16.319840 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:16.342093 kubelet[2815]: E1216 03:15:16.341632 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:16.351683 containerd[1591]: time="2025-12-16T03:15:16.350920184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9gwf,Uid:9b0bfa67-300d-4e4a-950f-06115edcd6a3,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:16.387805 kubelet[2815]: E1216 03:15:16.386675 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:16.387805 kubelet[2815]: W1216 03:15:16.386810 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:16.387805 kubelet[2815]: E1216 03:15:16.386838 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:16.398596 containerd[1591]: time="2025-12-16T03:15:16.398392477Z" level=info msg="connecting to shim 86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5" address="unix:///run/containerd/s/0f9d157a8ad47c444a4a0a6ba38f3e9ab934fde4a2c61882909aca95a56e534e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:16.436577 systemd[1]: Started cri-containerd-86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5.scope - libcontainer container 86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5. Dec 16 03:15:16.465000 audit: BPF prog-id=154 op=LOAD Dec 16 03:15:16.465000 audit: BPF prog-id=155 op=LOAD Dec 16 03:15:16.465000 audit[3393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.466000 audit: BPF prog-id=155 op=UNLOAD Dec 16 03:15:16.466000 audit[3393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.466000 audit: BPF prog-id=156 op=LOAD Dec 16 03:15:16.466000 audit[3393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.466000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.466000 audit: BPF prog-id=157 op=LOAD Dec 16 03:15:16.466000 audit[3393]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.467000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:15:16.467000 audit[3393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.467000 audit: BPF prog-id=156 op=UNLOAD Dec 16 03:15:16.467000 audit[3393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.467000 audit: BPF prog-id=158 op=LOAD Dec 16 03:15:16.467000 audit[3393]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3381 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333235663632363665363661373733306366326437393561373330 Dec 16 03:15:16.491850 containerd[1591]: time="2025-12-16T03:15:16.491738801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9gwf,Uid:9b0bfa67-300d-4e4a-950f-06115edcd6a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\"" Dec 16 03:15:16.494546 kubelet[2815]: E1216 03:15:16.494492 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Dec 16 03:15:16.690000 audit[3420]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:16.690000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd59703440 a2=0 a3=7ffd5970342c items=0 ppid=2938 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:16.695000 audit[3420]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:16.695000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd59703440 a2=0 a3=0 items=0 ppid=2938 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:16.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:17.932337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903784403.mount: Deactivated successfully. Dec 16 03:15:18.170489 kubelet[2815]: E1216 03:15:18.165891 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:19.138244 containerd[1591]: time="2025-12-16T03:15:19.137595507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:19.147630 containerd[1591]: time="2025-12-16T03:15:19.147571368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:15:19.148261 containerd[1591]: time="2025-12-16T03:15:19.148205099Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:19.154549 containerd[1591]: time="2025-12-16T03:15:19.154459899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:19.155421 containerd[1591]: time="2025-12-16T03:15:19.155154900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.872070139s" Dec 16 03:15:19.155421 containerd[1591]: time="2025-12-16T03:15:19.155196110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 
03:15:19.157002 containerd[1591]: time="2025-12-16T03:15:19.156973145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:15:19.179683 containerd[1591]: time="2025-12-16T03:15:19.179629493Z" level=info msg="CreateContainer within sandbox \"d52ef8f803db7a96b5f2c9cf2478b5d9bdaa0fed90e865593c296a0d9b39382b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:15:19.189280 containerd[1591]: time="2025-12-16T03:15:19.186108006Z" level=info msg="Container 94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:19.194009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295277167.mount: Deactivated successfully. Dec 16 03:15:19.201382 containerd[1591]: time="2025-12-16T03:15:19.201323117Z" level=info msg="CreateContainer within sandbox \"d52ef8f803db7a96b5f2c9cf2478b5d9bdaa0fed90e865593c296a0d9b39382b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539\"" Dec 16 03:15:19.202616 containerd[1591]: time="2025-12-16T03:15:19.202460727Z" level=info msg="StartContainer for \"94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539\"" Dec 16 03:15:19.204477 containerd[1591]: time="2025-12-16T03:15:19.204413242Z" level=info msg="connecting to shim 94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539" address="unix:///run/containerd/s/f5856155222aca4949d43940335a062b33ee897208dfb024f57dfaeafd9c68cc" protocol=ttrpc version=3 Dec 16 03:15:19.240145 systemd[1]: Started cri-containerd-94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539.scope - libcontainer container 94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539. 
Dec 16 03:15:19.263000 audit: BPF prog-id=159 op=LOAD Dec 16 03:15:19.264000 audit: BPF prog-id=160 op=LOAD Dec 16 03:15:19.264000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.264000 audit: BPF prog-id=160 op=UNLOAD Dec 16 03:15:19.264000 audit[3436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.264000 audit: BPF prog-id=161 op=LOAD Dec 16 03:15:19.264000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.265000 audit: BPF prog-id=162 op=LOAD Dec 16 03:15:19.265000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.265000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:15:19.265000 audit[3436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.265000 audit: BPF prog-id=161 op=UNLOAD Dec 16 03:15:19.265000 audit[3436]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.265000 audit: BPF prog-id=163 op=LOAD Dec 16 03:15:19.265000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3251 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:19.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353538623532383831633566356634643234663830363666363436 Dec 16 03:15:19.319537 containerd[1591]: time="2025-12-16T03:15:19.319481548Z" level=info msg="StartContainer for \"94558b52881c5f5f4d24f8066f646d75f5ea7c7f79583e00e495546706b22539\" returns successfully" Dec 16 03:15:20.178124 kubelet[2815]: E1216 03:15:20.178061 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:20.330108 kubelet[2815]: E1216 03:15:20.330063 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:20.407646 kubelet[2815]: E1216 03:15:20.407607 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.408095 kubelet[2815]: W1216 03:15:20.407960 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.408095 kubelet[2815]: E1216 03:15:20.408004 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.409313 kubelet[2815]: E1216 03:15:20.409095 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.409313 kubelet[2815]: W1216 03:15:20.409116 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.409313 kubelet[2815]: E1216 03:15:20.409156 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.409572 kubelet[2815]: E1216 03:15:20.409407 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.409572 kubelet[2815]: W1216 03:15:20.409418 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.409572 kubelet[2815]: E1216 03:15:20.409428 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.409724 kubelet[2815]: E1216 03:15:20.409704 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.409724 kubelet[2815]: W1216 03:15:20.409714 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.409831 kubelet[2815]: E1216 03:15:20.409724 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.410201 kubelet[2815]: E1216 03:15:20.409963 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.410201 kubelet[2815]: W1216 03:15:20.409974 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.410201 kubelet[2815]: E1216 03:15:20.409983 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.410417 kubelet[2815]: E1216 03:15:20.410401 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.410417 kubelet[2815]: W1216 03:15:20.410414 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.410587 kubelet[2815]: E1216 03:15:20.410435 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.410806 kubelet[2815]: E1216 03:15:20.410639 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.410806 kubelet[2815]: W1216 03:15:20.410663 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.410806 kubelet[2815]: E1216 03:15:20.410678 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.411141 kubelet[2815]: E1216 03:15:20.411123 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.411141 kubelet[2815]: W1216 03:15:20.411136 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.411263 kubelet[2815]: E1216 03:15:20.411147 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.411522 kubelet[2815]: E1216 03:15:20.411437 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.411522 kubelet[2815]: W1216 03:15:20.411453 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.411522 kubelet[2815]: E1216 03:15:20.411467 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.411973 kubelet[2815]: E1216 03:15:20.411886 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.411973 kubelet[2815]: W1216 03:15:20.411909 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.411973 kubelet[2815]: E1216 03:15:20.411921 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.412160 kubelet[2815]: E1216 03:15:20.412092 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.412160 kubelet[2815]: W1216 03:15:20.412100 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.412160 kubelet[2815]: E1216 03:15:20.412109 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.412607 kubelet[2815]: E1216 03:15:20.412273 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.412607 kubelet[2815]: W1216 03:15:20.412317 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.412607 kubelet[2815]: E1216 03:15:20.412329 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.412607 kubelet[2815]: E1216 03:15:20.412485 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.412607 kubelet[2815]: W1216 03:15:20.412492 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.412607 kubelet[2815]: E1216 03:15:20.412499 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.412951 kubelet[2815]: E1216 03:15:20.412935 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.412951 kubelet[2815]: W1216 03:15:20.412948 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.413054 kubelet[2815]: E1216 03:15:20.412959 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.413227 kubelet[2815]: E1216 03:15:20.413210 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.413227 kubelet[2815]: W1216 03:15:20.413224 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.413345 kubelet[2815]: E1216 03:15:20.413233 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.447131 kubelet[2815]: E1216 03:15:20.445961 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.447131 kubelet[2815]: W1216 03:15:20.445997 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.447131 kubelet[2815]: E1216 03:15:20.446028 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.447378 kubelet[2815]: E1216 03:15:20.447245 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.447378 kubelet[2815]: W1216 03:15:20.447265 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.447378 kubelet[2815]: E1216 03:15:20.447290 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.448918 kubelet[2815]: E1216 03:15:20.447882 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.448918 kubelet[2815]: W1216 03:15:20.447898 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.448918 kubelet[2815]: E1216 03:15:20.447967 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.448918 kubelet[2815]: E1216 03:15:20.448638 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.448918 kubelet[2815]: W1216 03:15:20.448656 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.448918 kubelet[2815]: E1216 03:15:20.448673 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.449372 kubelet[2815]: E1216 03:15:20.449346 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.449372 kubelet[2815]: W1216 03:15:20.449360 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.449687 kubelet[2815]: E1216 03:15:20.449664 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.450265 kubelet[2815]: E1216 03:15:20.450239 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.450265 kubelet[2815]: W1216 03:15:20.450259 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.450445 kubelet[2815]: E1216 03:15:20.450275 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.451158 kubelet[2815]: E1216 03:15:20.451039 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.451158 kubelet[2815]: W1216 03:15:20.451146 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.451158 kubelet[2815]: E1216 03:15:20.451161 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.451677 kubelet[2815]: E1216 03:15:20.451648 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.451677 kubelet[2815]: W1216 03:15:20.451671 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.451851 kubelet[2815]: E1216 03:15:20.451683 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.452133 kubelet[2815]: E1216 03:15:20.452117 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.452133 kubelet[2815]: W1216 03:15:20.452130 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.452248 kubelet[2815]: E1216 03:15:20.452141 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.452975 kubelet[2815]: E1216 03:15:20.452944 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.453069 kubelet[2815]: W1216 03:15:20.453055 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.453118 kubelet[2815]: E1216 03:15:20.453073 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.453538 kubelet[2815]: E1216 03:15:20.453519 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.453538 kubelet[2815]: W1216 03:15:20.453534 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.453647 kubelet[2815]: E1216 03:15:20.453546 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.454106 kubelet[2815]: E1216 03:15:20.454086 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.454192 kubelet[2815]: W1216 03:15:20.454104 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.454239 kubelet[2815]: E1216 03:15:20.454199 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.455026 kubelet[2815]: E1216 03:15:20.455007 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.455026 kubelet[2815]: W1216 03:15:20.455021 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.455153 kubelet[2815]: E1216 03:15:20.455033 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.456127 kubelet[2815]: E1216 03:15:20.456108 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.456127 kubelet[2815]: W1216 03:15:20.456122 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.456348 kubelet[2815]: E1216 03:15:20.456133 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.456483 kubelet[2815]: E1216 03:15:20.456467 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.456483 kubelet[2815]: W1216 03:15:20.456480 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.456591 kubelet[2815]: E1216 03:15:20.456492 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.457034 kubelet[2815]: E1216 03:15:20.457008 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.457034 kubelet[2815]: W1216 03:15:20.457022 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.457164 kubelet[2815]: E1216 03:15:20.457034 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.457436 kubelet[2815]: E1216 03:15:20.457354 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.457436 kubelet[2815]: W1216 03:15:20.457368 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.457436 kubelet[2815]: E1216 03:15:20.457392 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:20.458001 kubelet[2815]: E1216 03:15:20.457903 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:20.458001 kubelet[2815]: W1216 03:15:20.457915 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:20.458001 kubelet[2815]: E1216 03:15:20.457925 2815 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:20.470032 containerd[1591]: time="2025-12-16T03:15:20.469974863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:20.471556 containerd[1591]: time="2025-12-16T03:15:20.471504496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:20.472174 containerd[1591]: time="2025-12-16T03:15:20.472108152Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:20.474606 containerd[1591]: time="2025-12-16T03:15:20.474534581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:20.475587 containerd[1591]: time="2025-12-16T03:15:20.474977442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.31778571s" Dec 16 03:15:20.475587 containerd[1591]: time="2025-12-16T03:15:20.475023886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:15:20.481647 containerd[1591]: time="2025-12-16T03:15:20.481603543Z" level=info msg="CreateContainer within sandbox \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:15:20.491834 containerd[1591]: time="2025-12-16T03:15:20.491158251Z" level=info msg="Container 088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:20.498659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983428619.mount: Deactivated successfully. 
Dec 16 03:15:20.508229 containerd[1591]: time="2025-12-16T03:15:20.508131893Z" level=info msg="CreateContainer within sandbox \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400\"" Dec 16 03:15:20.509454 containerd[1591]: time="2025-12-16T03:15:20.509232314Z" level=info msg="StartContainer for \"088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400\"" Dec 16 03:15:20.515819 containerd[1591]: time="2025-12-16T03:15:20.515706848Z" level=info msg="connecting to shim 088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400" address="unix:///run/containerd/s/0f9d157a8ad47c444a4a0a6ba38f3e9ab934fde4a2c61882909aca95a56e534e" protocol=ttrpc version=3 Dec 16 03:15:20.563167 systemd[1]: Started cri-containerd-088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400.scope - libcontainer container 088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400. Dec 16 03:15:20.620000 audit: BPF prog-id=164 op=LOAD Dec 16 03:15:20.623702 kernel: kauditd_printk_skb: 74 callbacks suppressed Dec 16 03:15:20.623810 kernel: audit: type=1334 audit(1765854920.620:559): prog-id=164 op=LOAD Dec 16 03:15:20.620000 audit[3509]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.628988 kernel: audit: type=1300 audit(1765854920.620:559): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.623000 audit: BPF prog-id=165 op=LOAD Dec 16 03:15:20.640310 kernel: audit: type=1327 audit(1765854920.620:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.640394 kernel: audit: type=1334 audit(1765854920.623:560): prog-id=165 op=LOAD Dec 16 03:15:20.623000 audit[3509]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.643902 kernel: audit: type=1300 audit(1765854920.623:560): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.623000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.650555 kernel: audit: type=1327 audit(1765854920.623:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.624000 audit: BPF prog-id=165 op=UNLOAD Dec 16 03:15:20.654599 kernel: audit: type=1334 audit(1765854920.624:561): prog-id=165 op=UNLOAD Dec 16 03:15:20.624000 audit[3509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.658227 kernel: audit: type=1300 audit(1765854920.624:561): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.667919 kernel: audit: type=1327 audit(1765854920.624:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.624000 audit: BPF prog-id=164 op=UNLOAD Dec 16 03:15:20.670865 kernel: audit: type=1334 audit(1765854920.624:562): prog-id=164 op=UNLOAD Dec 16 03:15:20.624000 audit[3509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.624000 audit: BPF prog-id=166 op=LOAD Dec 16 03:15:20.624000 audit[3509]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3381 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:20.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038386364386131333831643565376338653361386138376438323565 Dec 16 03:15:20.706218 
containerd[1591]: time="2025-12-16T03:15:20.705198480Z" level=info msg="StartContainer for \"088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400\" returns successfully" Dec 16 03:15:20.722912 systemd[1]: cri-containerd-088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400.scope: Deactivated successfully. Dec 16 03:15:20.723000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:15:20.763187 containerd[1591]: time="2025-12-16T03:15:20.763038863Z" level=info msg="received container exit event container_id:\"088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400\" id:\"088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400\" pid:3521 exited_at:{seconds:1765854920 nanos:729685115}" Dec 16 03:15:20.798453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-088cd8a1381d5e7c8e3a8a87d825e43892fa7cb8517481f6745cd9cd79a37400-rootfs.mount: Deactivated successfully. Dec 16 03:15:21.336055 kubelet[2815]: E1216 03:15:21.334863 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:21.337755 kubelet[2815]: I1216 03:15:21.337195 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:15:21.340044 kubelet[2815]: E1216 03:15:21.340004 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:21.341616 containerd[1591]: time="2025-12-16T03:15:21.341307089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:15:21.372884 kubelet[2815]: I1216 03:15:21.372715 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f698866d7-w4dxm" podStartSLOduration=3.498607188 podStartE2EDuration="6.372569708s" podCreationTimestamp="2025-12-16 03:15:15 +0000 UTC" firstStartedPulling="2025-12-16 03:15:16.282555158 +0000 UTC m=+22.348335224" lastFinishedPulling="2025-12-16 03:15:19.156517691 +0000 UTC m=+25.222297744" observedRunningTime="2025-12-16 03:15:20.365388995 +0000 UTC m=+26.431169101" watchObservedRunningTime="2025-12-16 03:15:21.372569708 +0000 UTC m=+27.438349785" Dec 16 03:15:22.165106 kubelet[2815]: E1216 03:15:22.164175 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:24.175106 kubelet[2815]: E1216 03:15:24.174563 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:25.029252 containerd[1591]: time="2025-12-16T03:15:25.029197356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:25.033818 containerd[1591]: time="2025-12-16T03:15:25.033658221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:15:25.035833 containerd[1591]: 
time="2025-12-16T03:15:25.034866247Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:25.046981 containerd[1591]: time="2025-12-16T03:15:25.046922774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:25.047512 containerd[1591]: time="2025-12-16T03:15:25.047390604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.706033111s" Dec 16 03:15:25.047593 containerd[1591]: time="2025-12-16T03:15:25.047516157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:15:25.052858 containerd[1591]: time="2025-12-16T03:15:25.052814755Z" level=info msg="CreateContainer within sandbox \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:15:25.062808 containerd[1591]: time="2025-12-16T03:15:25.062741435Z" level=info msg="Container eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:25.074905 containerd[1591]: time="2025-12-16T03:15:25.074847140Z" level=info msg="CreateContainer within sandbox \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019\"" Dec 16 03:15:25.075978 containerd[1591]: time="2025-12-16T03:15:25.075946132Z" level=info msg="StartContainer for \"eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019\"" Dec 16 03:15:25.078338 containerd[1591]: time="2025-12-16T03:15:25.078300320Z" level=info msg="connecting to shim eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019" address="unix:///run/containerd/s/0f9d157a8ad47c444a4a0a6ba38f3e9ab934fde4a2c61882909aca95a56e534e" protocol=ttrpc version=3 Dec 16 03:15:25.118136 systemd[1]: Started cri-containerd-eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019.scope - libcontainer container eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019. 
Dec 16 03:15:25.186000 audit: BPF prog-id=167 op=LOAD Dec 16 03:15:25.186000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3381 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633237323438323336313533383266386636323337333661313464 Dec 16 03:15:25.186000 audit: BPF prog-id=168 op=LOAD Dec 16 03:15:25.186000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3381 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633237323438323336313533383266386636323337333661313464 Dec 16 03:15:25.186000 audit: BPF prog-id=168 op=UNLOAD Dec 16 03:15:25.186000 audit[3567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633237323438323336313533383266386636323337333661313464 Dec 16 03:15:25.186000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:15:25.186000 audit[3567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633237323438323336313533383266386636323337333661313464 Dec 16 03:15:25.186000 audit: BPF prog-id=169 op=LOAD Dec 16 03:15:25.186000 audit[3567]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3381 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633237323438323336313533383266386636323337333661313464 Dec 16 03:15:25.212642 containerd[1591]: time="2025-12-16T03:15:25.212596191Z" level=info msg="StartContainer for 
\"eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019\" returns successfully" Dec 16 03:15:25.352938 kubelet[2815]: E1216 03:15:25.352897 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:25.823988 systemd[1]: cri-containerd-eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019.scope: Deactivated successfully. Dec 16 03:15:25.824315 systemd[1]: cri-containerd-eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019.scope: Consumed 592ms CPU time, 161.7M memory peak, 6.6M read from disk, 171.3M written to disk. Dec 16 03:15:25.827000 audit: BPF prog-id=169 op=UNLOAD Dec 16 03:15:25.829277 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 16 03:15:25.829337 kernel: audit: type=1334 audit(1765854925.827:570): prog-id=169 op=UNLOAD Dec 16 03:15:25.839009 containerd[1591]: time="2025-12-16T03:15:25.838890575Z" level=info msg="received container exit event container_id:\"eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019\" id:\"eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019\" pid:3580 exited_at:{seconds:1765854925 nanos:827508120}" Dec 16 03:15:25.908568 kubelet[2815]: I1216 03:15:25.907348 2815 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 03:15:25.930663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eac2724823615382f8f623736a14d319e3f04ee2c91c9d990a646fa72cd27019-rootfs.mount: Deactivated successfully. Dec 16 03:15:26.003592 systemd[1]: Created slice kubepods-burstable-pod3c29e9a8_e273_4ab1_b741_61dd6cf59264.slice - libcontainer container kubepods-burstable-pod3c29e9a8_e273_4ab1_b741_61dd6cf59264.slice. Dec 16 03:15:26.020752 systemd[1]: Created slice kubepods-besteffort-pode282fbeb_8283_4c6d_8776_2743ee6a3341.slice - libcontainer container kubepods-besteffort-pode282fbeb_8283_4c6d_8776_2743ee6a3341.slice. Dec 16 03:15:26.037462 systemd[1]: Created slice kubepods-burstable-poddd9d79d8_7a6e_4723_b884_0936ba21acc5.slice - libcontainer container kubepods-burstable-poddd9d79d8_7a6e_4723_b884_0936ba21acc5.slice. Dec 16 03:15:26.055856 systemd[1]: Created slice kubepods-besteffort-poda48216b5_38ee_49b5_8de6_29119905fab4.slice - libcontainer container kubepods-besteffort-poda48216b5_38ee_49b5_8de6_29119905fab4.slice. Dec 16 03:15:26.066266 systemd[1]: Created slice kubepods-besteffort-pod72811ce9_36e4_402b_b055_27281a8b884a.slice - libcontainer container kubepods-besteffort-pod72811ce9_36e4_402b_b055_27281a8b884a.slice. Dec 16 03:15:26.081985 systemd[1]: Created slice kubepods-besteffort-podb1b86abd_1663_4902_987a_286c262c84b0.slice - libcontainer container kubepods-besteffort-podb1b86abd_1663_4902_987a_286c262c84b0.slice. 
Dec 16 03:15:26.088997 kubelet[2815]: I1216 03:15:26.088963 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e282fbeb-8283-4c6d-8776-2743ee6a3341-tigera-ca-bundle\") pod \"calico-kube-controllers-7944db5ff7-jqshs\" (UID: \"e282fbeb-8283-4c6d-8776-2743ee6a3341\") " pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" Dec 16 03:15:26.090989 kubelet[2815]: I1216 03:15:26.090846 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4sbz\" (UniqueName: \"kubernetes.io/projected/a48216b5-38ee-49b5-8de6-29119905fab4-kube-api-access-x4sbz\") pod \"calico-apiserver-b66776c65-79frt\" (UID: \"a48216b5-38ee-49b5-8de6-29119905fab4\") " pod="calico-apiserver/calico-apiserver-b66776c65-79frt" Dec 16 03:15:26.092806 kubelet[2815]: I1216 03:15:26.091155 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3fd5d06e-1026-4a73-b6dc-d83cdf5f2977-calico-apiserver-certs\") pod \"calico-apiserver-b66776c65-kfd2d\" (UID: \"3fd5d06e-1026-4a73-b6dc-d83cdf5f2977\") " pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" Dec 16 03:15:26.092806 kubelet[2815]: I1216 03:15:26.091191 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8h2x\" (UniqueName: \"kubernetes.io/projected/e282fbeb-8283-4c6d-8776-2743ee6a3341-kube-api-access-w8h2x\") pod \"calico-kube-controllers-7944db5ff7-jqshs\" (UID: \"e282fbeb-8283-4c6d-8776-2743ee6a3341\") " pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" Dec 16 03:15:26.092806 kubelet[2815]: I1216 03:15:26.091351 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw7kr\" (UniqueName: \"kubernetes.io/projected/3fd5d06e-1026-4a73-b6dc-d83cdf5f2977-kube-api-access-fw7kr\") pod \"calico-apiserver-b66776c65-kfd2d\" (UID: \"3fd5d06e-1026-4a73-b6dc-d83cdf5f2977\") " pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" Dec 16 03:15:26.092806 kubelet[2815]: I1216 03:15:26.091388 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmskc\" (UniqueName: \"kubernetes.io/projected/dd9d79d8-7a6e-4723-b884-0936ba21acc5-kube-api-access-hmskc\") pod \"coredns-674b8bbfcf-r9lx6\" (UID: \"dd9d79d8-7a6e-4723-b884-0936ba21acc5\") " pod="kube-system/coredns-674b8bbfcf-r9lx6" Dec 16 03:15:26.092806 kubelet[2815]: I1216 03:15:26.091409 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rczmg\" (UniqueName: \"kubernetes.io/projected/72811ce9-36e4-402b-b055-27281a8b884a-kube-api-access-rczmg\") pod \"whisker-588479bb49-sqqpm\" (UID: \"72811ce9-36e4-402b-b055-27281a8b884a\") " pod="calico-system/whisker-588479bb49-sqqpm" Dec 16 03:15:26.093042 kubelet[2815]: I1216 03:15:26.091552 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b86abd-1663-4902-987a-286c262c84b0-goldmane-ca-bundle\") pod \"goldmane-666569f655-9g7b7\" (UID: \"b1b86abd-1663-4902-987a-286c262c84b0\") " pod="calico-system/goldmane-666569f655-9g7b7" Dec 16 03:15:26.093042 kubelet[2815]: I1216 03:15:26.091576 2815 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a48216b5-38ee-49b5-8de6-29119905fab4-calico-apiserver-certs\") pod \"calico-apiserver-b66776c65-79frt\" (UID: \"a48216b5-38ee-49b5-8de6-29119905fab4\") " pod="calico-apiserver/calico-apiserver-b66776c65-79frt" Dec 16 03:15:26.093042 kubelet[2815]: I1216 03:15:26.091594 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b1b86abd-1663-4902-987a-286c262c84b0-goldmane-key-pair\") pod \"goldmane-666569f655-9g7b7\" (UID: \"b1b86abd-1663-4902-987a-286c262c84b0\") " pod="calico-system/goldmane-666569f655-9g7b7" Dec 16 03:15:26.093042 kubelet[2815]: I1216 03:15:26.091728 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c29e9a8-e273-4ab1-b741-61dd6cf59264-config-volume\") pod \"coredns-674b8bbfcf-8tqdq\" (UID: \"3c29e9a8-e273-4ab1-b741-61dd6cf59264\") " pod="kube-system/coredns-674b8bbfcf-8tqdq" Dec 16 03:15:26.093042 kubelet[2815]: I1216 03:15:26.091752 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72811ce9-36e4-402b-b055-27281a8b884a-whisker-ca-bundle\") pod \"whisker-588479bb49-sqqpm\" (UID: \"72811ce9-36e4-402b-b055-27281a8b884a\") " pod="calico-system/whisker-588479bb49-sqqpm" Dec 16 03:15:26.093184 kubelet[2815]: I1216 03:15:26.091768 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnxl\" (UniqueName: \"kubernetes.io/projected/3c29e9a8-e273-4ab1-b741-61dd6cf59264-kube-api-access-5qnxl\") pod \"coredns-674b8bbfcf-8tqdq\" (UID: \"3c29e9a8-e273-4ab1-b741-61dd6cf59264\") " pod="kube-system/coredns-674b8bbfcf-8tqdq" Dec 16 03:15:26.093184 kubelet[2815]: I1216 03:15:26.091882 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd9d79d8-7a6e-4723-b884-0936ba21acc5-config-volume\") pod \"coredns-674b8bbfcf-r9lx6\" (UID: \"dd9d79d8-7a6e-4723-b884-0936ba21acc5\") " pod="kube-system/coredns-674b8bbfcf-r9lx6" Dec 16 03:15:26.093184 kubelet[2815]: I1216 03:15:26.091904 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72811ce9-36e4-402b-b055-27281a8b884a-whisker-backend-key-pair\") pod \"whisker-588479bb49-sqqpm\" (UID: \"72811ce9-36e4-402b-b055-27281a8b884a\") " pod="calico-system/whisker-588479bb49-sqqpm" Dec 16 03:15:26.093184 kubelet[2815]: I1216 03:15:26.091925 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b86abd-1663-4902-987a-286c262c84b0-config\") pod \"goldmane-666569f655-9g7b7\" (UID: \"b1b86abd-1663-4902-987a-286c262c84b0\") " pod="calico-system/goldmane-666569f655-9g7b7" Dec 16 03:15:26.093184 kubelet[2815]: I1216 03:15:26.092040 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6xr\" (UniqueName: \"kubernetes.io/projected/b1b86abd-1663-4902-987a-286c262c84b0-kube-api-access-fv6xr\") pod \"goldmane-666569f655-9g7b7\" (UID: \"b1b86abd-1663-4902-987a-286c262c84b0\") " 
pod="calico-system/goldmane-666569f655-9g7b7" Dec 16 03:15:26.101603 systemd[1]: Created slice kubepods-besteffort-pod3fd5d06e_1026_4a73_b6dc_d83cdf5f2977.slice - libcontainer container kubepods-besteffort-pod3fd5d06e_1026_4a73_b6dc_d83cdf5f2977.slice. Dec 16 03:15:26.174351 systemd[1]: Created slice kubepods-besteffort-pod2f0d2d2c_6593_4c7a_9cdf_35214b834c16.slice - libcontainer container kubepods-besteffort-pod2f0d2d2c_6593_4c7a_9cdf_35214b834c16.slice. Dec 16 03:15:26.180187 containerd[1591]: time="2025-12-16T03:15:26.180128751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jz64m,Uid:2f0d2d2c-6593-4c7a-9cdf-35214b834c16,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:26.316119 kubelet[2815]: E1216 03:15:26.316084 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:26.318285 containerd[1591]: time="2025-12-16T03:15:26.318162267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8tqdq,Uid:3c29e9a8-e273-4ab1-b741-61dd6cf59264,Namespace:kube-system,Attempt:0,}" Dec 16 03:15:26.348944 kubelet[2815]: E1216 03:15:26.348543 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:26.357686 containerd[1591]: time="2025-12-16T03:15:26.355829056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lx6,Uid:dd9d79d8-7a6e-4723-b884-0936ba21acc5,Namespace:kube-system,Attempt:0,}" Dec 16 03:15:26.369870 containerd[1591]: time="2025-12-16T03:15:26.367741031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-79frt,Uid:a48216b5-38ee-49b5-8de6-29119905fab4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:15:26.378826 containerd[1591]: time="2025-12-16T03:15:26.378421212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-588479bb49-sqqpm,Uid:72811ce9-36e4-402b-b055-27281a8b884a,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:26.411806 containerd[1591]: time="2025-12-16T03:15:26.408718445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9g7b7,Uid:b1b86abd-1663-4902-987a-286c262c84b0,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:26.411987 kubelet[2815]: E1216 03:15:26.409728 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:26.420843 containerd[1591]: time="2025-12-16T03:15:26.420146732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-kfd2d,Uid:3fd5d06e-1026-4a73-b6dc-d83cdf5f2977,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:15:26.426629 containerd[1591]: time="2025-12-16T03:15:26.426531756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:15:26.635260 containerd[1591]: time="2025-12-16T03:15:26.635134432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944db5ff7-jqshs,Uid:e282fbeb-8283-4c6d-8776-2743ee6a3341,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:26.695847 containerd[1591]: time="2025-12-16T03:15:26.695393088Z" level=error msg="Failed to destroy network for sandbox \"72ac83e69fe6b55591d02cdd59f750a75e9e91f92d14326603953725777482ac\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.777537 containerd[1591]: time="2025-12-16T03:15:26.704334455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-kfd2d,Uid:3fd5d06e-1026-4a73-b6dc-d83cdf5f2977,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ac83e69fe6b55591d02cdd59f750a75e9e91f92d14326603953725777482ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.787845 containerd[1591]: time="2025-12-16T03:15:26.728300444Z" level=error msg="Failed to destroy network for sandbox \"90d3919ee4431e00e3e3137a103b85c833e840430f1aeb4ec2753c19aa9af3bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.789319 containerd[1591]: time="2025-12-16T03:15:26.789249652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8tqdq,Uid:3c29e9a8-e273-4ab1-b741-61dd6cf59264,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d3919ee4431e00e3e3137a103b85c833e840430f1aeb4ec2753c19aa9af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.807099 kubelet[2815]: E1216 03:15:26.806101 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d3919ee4431e00e3e3137a103b85c833e840430f1aeb4ec2753c19aa9af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.807099 kubelet[2815]: E1216 03:15:26.806186 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d3919ee4431e00e3e3137a103b85c833e840430f1aeb4ec2753c19aa9af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8tqdq" Dec 16 03:15:26.807099 kubelet[2815]: E1216 03:15:26.806249 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d3919ee4431e00e3e3137a103b85c833e840430f1aeb4ec2753c19aa9af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8tqdq" Dec 16 03:15:26.807975 kubelet[2815]: E1216 03:15:26.806327 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8tqdq_kube-system(3c29e9a8-e273-4ab1-b741-61dd6cf59264)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8tqdq_kube-system(3c29e9a8-e273-4ab1-b741-61dd6cf59264)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"90d3919ee4431e00e3e3137a103b85c833e840430f1aeb4ec2753c19aa9af3bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8tqdq" podUID="3c29e9a8-e273-4ab1-b741-61dd6cf59264" Dec 16 03:15:26.807975 kubelet[2815]: E1216 03:15:26.806924 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ac83e69fe6b55591d02cdd59f750a75e9e91f92d14326603953725777482ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.807975 kubelet[2815]: E1216 03:15:26.806973 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ac83e69fe6b55591d02cdd59f750a75e9e91f92d14326603953725777482ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" Dec 16 03:15:26.808110 kubelet[2815]: E1216 03:15:26.807002 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72ac83e69fe6b55591d02cdd59f750a75e9e91f92d14326603953725777482ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" Dec 16 03:15:26.808110 kubelet[2815]: E1216 03:15:26.807048 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b66776c65-kfd2d_calico-apiserver(3fd5d06e-1026-4a73-b6dc-d83cdf5f2977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b66776c65-kfd2d_calico-apiserver(3fd5d06e-1026-4a73-b6dc-d83cdf5f2977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72ac83e69fe6b55591d02cdd59f750a75e9e91f92d14326603953725777482ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:15:26.809946 containerd[1591]: time="2025-12-16T03:15:26.809636965Z" level=error msg="Failed to destroy network for sandbox \"93870bc37fcb4a4aab2893c3204eb59e20aa9d62d4ac9476edc93c752691b29f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.812926 containerd[1591]: time="2025-12-16T03:15:26.812818980Z" level=error msg="Failed to destroy network for sandbox \"94817062a654805c816e7df35681f7fcb8df3088cf96d47bcb15d5289fe64069\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.814739 containerd[1591]: time="2025-12-16T03:15:26.814633206Z" level=error msg="Failed to 
destroy network for sandbox \"61be3e4cd950fd5153b0879e877376b2402dcc5dbd734485ec87681b435fe57e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.817612 containerd[1591]: time="2025-12-16T03:15:26.817566145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jz64m,Uid:2f0d2d2c-6593-4c7a-9cdf-35214b834c16,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93870bc37fcb4a4aab2893c3204eb59e20aa9d62d4ac9476edc93c752691b29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.818723 kubelet[2815]: E1216 03:15:26.818479 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93870bc37fcb4a4aab2893c3204eb59e20aa9d62d4ac9476edc93c752691b29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.818723 kubelet[2815]: E1216 03:15:26.818548 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93870bc37fcb4a4aab2893c3204eb59e20aa9d62d4ac9476edc93c752691b29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:26.818723 kubelet[2815]: E1216 03:15:26.818588 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93870bc37fcb4a4aab2893c3204eb59e20aa9d62d4ac9476edc93c752691b29f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jz64m" Dec 16 03:15:26.819139 containerd[1591]: time="2025-12-16T03:15:26.818599035Z" level=error msg="Failed to destroy network for sandbox \"d69ea0a26b10b0c7f34eaef5970ce33e0429f21261872f2238b8f0d8a20b1764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.819139 containerd[1591]: time="2025-12-16T03:15:26.818932824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lx6,Uid:dd9d79d8-7a6e-4723-b884-0936ba21acc5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94817062a654805c816e7df35681f7fcb8df3088cf96d47bcb15d5289fe64069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.819244 kubelet[2815]: E1216 03:15:26.818642 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93870bc37fcb4a4aab2893c3204eb59e20aa9d62d4ac9476edc93c752691b29f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:26.820883 kubelet[2815]: E1216 03:15:26.819836 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94817062a654805c816e7df35681f7fcb8df3088cf96d47bcb15d5289fe64069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.820883 kubelet[2815]: E1216 03:15:26.820277 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94817062a654805c816e7df35681f7fcb8df3088cf96d47bcb15d5289fe64069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r9lx6" Dec 16 03:15:26.821930 kubelet[2815]: E1216 03:15:26.821556 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94817062a654805c816e7df35681f7fcb8df3088cf96d47bcb15d5289fe64069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r9lx6" Dec 16 03:15:26.821930 kubelet[2815]: E1216 03:15:26.821832 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r9lx6_kube-system(dd9d79d8-7a6e-4723-b884-0936ba21acc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r9lx6_kube-system(dd9d79d8-7a6e-4723-b884-0936ba21acc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94817062a654805c816e7df35681f7fcb8df3088cf96d47bcb15d5289fe64069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r9lx6" podUID="dd9d79d8-7a6e-4723-b884-0936ba21acc5" Dec 16 03:15:26.826538 containerd[1591]: time="2025-12-16T03:15:26.826489021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-588479bb49-sqqpm,Uid:72811ce9-36e4-402b-b055-27281a8b884a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69ea0a26b10b0c7f34eaef5970ce33e0429f21261872f2238b8f0d8a20b1764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.827280 containerd[1591]: time="2025-12-16T03:15:26.826850740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-79frt,Uid:a48216b5-38ee-49b5-8de6-29119905fab4,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61be3e4cd950fd5153b0879e877376b2402dcc5dbd734485ec87681b435fe57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.827407 kubelet[2815]: E1216 03:15:26.827069 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69ea0a26b10b0c7f34eaef5970ce33e0429f21261872f2238b8f0d8a20b1764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.827407 kubelet[2815]: E1216 03:15:26.827125 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69ea0a26b10b0c7f34eaef5970ce33e0429f21261872f2238b8f0d8a20b1764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-588479bb49-sqqpm" Dec 16 03:15:26.827407 kubelet[2815]: E1216 03:15:26.827149 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69ea0a26b10b0c7f34eaef5970ce33e0429f21261872f2238b8f0d8a20b1764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-588479bb49-sqqpm" Dec 16 03:15:26.827528 kubelet[2815]: E1216 03:15:26.827204 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-588479bb49-sqqpm_calico-system(72811ce9-36e4-402b-b055-27281a8b884a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-588479bb49-sqqpm_calico-system(72811ce9-36e4-402b-b055-27281a8b884a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d69ea0a26b10b0c7f34eaef5970ce33e0429f21261872f2238b8f0d8a20b1764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-588479bb49-sqqpm" podUID="72811ce9-36e4-402b-b055-27281a8b884a" Dec 16 03:15:26.828556 kubelet[2815]: E1216 03:15:26.827772 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61be3e4cd950fd5153b0879e877376b2402dcc5dbd734485ec87681b435fe57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.828556 kubelet[2815]: E1216 03:15:26.827831 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61be3e4cd950fd5153b0879e877376b2402dcc5dbd734485ec87681b435fe57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" Dec 16 03:15:26.828556 kubelet[2815]: 
E1216 03:15:26.827849 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61be3e4cd950fd5153b0879e877376b2402dcc5dbd734485ec87681b435fe57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" Dec 16 03:15:26.828701 kubelet[2815]: E1216 03:15:26.827899 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b66776c65-79frt_calico-apiserver(a48216b5-38ee-49b5-8de6-29119905fab4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b66776c65-79frt_calico-apiserver(a48216b5-38ee-49b5-8de6-29119905fab4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61be3e4cd950fd5153b0879e877376b2402dcc5dbd734485ec87681b435fe57e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:15:26.838477 containerd[1591]: time="2025-12-16T03:15:26.838364029Z" level=error msg="Failed to destroy network for sandbox \"6a345a05db9d56ae51e4d4b525b937ae469dc74c6059af47e26bbc9d87c164c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.844823 containerd[1591]: time="2025-12-16T03:15:26.844667132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9g7b7,Uid:b1b86abd-1663-4902-987a-286c262c84b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a345a05db9d56ae51e4d4b525b937ae469dc74c6059af47e26bbc9d87c164c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.845369 kubelet[2815]: E1216 03:15:26.845033 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a345a05db9d56ae51e4d4b525b937ae469dc74c6059af47e26bbc9d87c164c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.845369 kubelet[2815]: E1216 03:15:26.845094 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a345a05db9d56ae51e4d4b525b937ae469dc74c6059af47e26bbc9d87c164c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9g7b7" Dec 16 03:15:26.845369 kubelet[2815]: E1216 03:15:26.845136 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a345a05db9d56ae51e4d4b525b937ae469dc74c6059af47e26bbc9d87c164c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9g7b7" Dec 16 03:15:26.845481 kubelet[2815]: E1216 03:15:26.845188 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9g7b7_calico-system(b1b86abd-1663-4902-987a-286c262c84b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9g7b7_calico-system(b1b86abd-1663-4902-987a-286c262c84b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a345a05db9d56ae51e4d4b525b937ae469dc74c6059af47e26bbc9d87c164c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0" Dec 16 03:15:26.877121 containerd[1591]: time="2025-12-16T03:15:26.877038861Z" level=error msg="Failed to destroy network for sandbox \"6577b228210a69cf665abdeea3b688f2b895ce1964531ab8673c50cb80dd93e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.879522 containerd[1591]: time="2025-12-16T03:15:26.879437035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944db5ff7-jqshs,Uid:e282fbeb-8283-4c6d-8776-2743ee6a3341,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6577b228210a69cf665abdeea3b688f2b895ce1964531ab8673c50cb80dd93e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.880099 kubelet[2815]: E1216 03:15:26.879991 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6577b228210a69cf665abdeea3b688f2b895ce1964531ab8673c50cb80dd93e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:26.880209 kubelet[2815]: E1216 03:15:26.880129 2815 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6577b228210a69cf665abdeea3b688f2b895ce1964531ab8673c50cb80dd93e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" Dec 16 03:15:26.880209 kubelet[2815]: E1216 03:15:26.880157 2815 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6577b228210a69cf665abdeea3b688f2b895ce1964531ab8673c50cb80dd93e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" Dec 16 03:15:26.880277 kubelet[2815]: E1216 03:15:26.880221 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-7944db5ff7-jqshs_calico-system(e282fbeb-8283-4c6d-8776-2743ee6a3341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7944db5ff7-jqshs_calico-system(e282fbeb-8283-4c6d-8776-2743ee6a3341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6577b228210a69cf665abdeea3b688f2b895ce1964531ab8673c50cb80dd93e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341" Dec 16 03:15:27.209151 systemd[1]: run-netns-cni\x2d05653998\x2d3be5\x2d9ab3\x2d27de\x2d77cc7bc08eb4.mount: Deactivated successfully. Dec 16 03:15:34.222644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406937005.mount: Deactivated successfully. Dec 16 03:15:34.274528 containerd[1591]: time="2025-12-16T03:15:34.268687018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:15:34.278835 containerd[1591]: time="2025-12-16T03:15:34.263650084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:34.324771 containerd[1591]: time="2025-12-16T03:15:34.324716523Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:34.327703 containerd[1591]: time="2025-12-16T03:15:34.327660659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:34.328560 containerd[1591]: time="2025-12-16T03:15:34.328517370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.901868752s" Dec 16 03:15:34.328667 containerd[1591]: time="2025-12-16T03:15:34.328562984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:15:34.376807 containerd[1591]: time="2025-12-16T03:15:34.376730294Z" level=info msg="CreateContainer within sandbox \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:15:34.467178 containerd[1591]: time="2025-12-16T03:15:34.466400052Z" level=info msg="Container 11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:34.472370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1813413861.mount: Deactivated successfully. 
Dec 16 03:15:34.520378 containerd[1591]: time="2025-12-16T03:15:34.520191990Z" level=info msg="CreateContainer within sandbox \"86325f6266e66a7730cf2d795a73063a83fc158f748b218a63224c7de4504fc5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c\"" Dec 16 03:15:34.522463 containerd[1591]: time="2025-12-16T03:15:34.522022802Z" level=info msg="StartContainer for \"11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c\"" Dec 16 03:15:34.544847 containerd[1591]: time="2025-12-16T03:15:34.544128047Z" level=info msg="connecting to shim 11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c" address="unix:///run/containerd/s/0f9d157a8ad47c444a4a0a6ba38f3e9ab934fde4a2c61882909aca95a56e534e" protocol=ttrpc version=3 Dec 16 03:15:34.798887 systemd[1]: Started cri-containerd-11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c.scope - libcontainer container 11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c. Dec 16 03:15:34.864000 audit: BPF prog-id=170 op=LOAD Dec 16 03:15:34.868817 kernel: audit: type=1334 audit(1765854934.864:571): prog-id=170 op=LOAD Dec 16 03:15:34.864000 audit[3841]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.875948 kernel: audit: type=1300 audit(1765854934.864:571): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.882053 kernel: audit: type=1327 audit(1765854934.864:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.882246 kernel: audit: type=1334 audit(1765854934.867:572): prog-id=171 op=LOAD Dec 16 03:15:34.867000 audit: BPF prog-id=171 op=LOAD Dec 16 03:15:34.867000 audit[3841]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.890827 kernel: audit: type=1300 audit(1765854934.867:572): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.890955 kernel: audit: type=1327 audit(1765854934.867:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.867000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:15:34.894223 kernel: audit: type=1334 audit(1765854934.867:573): prog-id=171 op=UNLOAD Dec 16 03:15:34.897046 kernel: audit: type=1300 audit(1765854934.867:573): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.867000 audit[3841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.867000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:15:34.905205 kernel: audit: type=1327 audit(1765854934.867:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.905295 kernel: audit: type=1334 audit(1765854934.867:574): prog-id=170 op=UNLOAD Dec 16 03:15:34.867000 audit[3841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.867000 audit: BPF prog-id=172 op=LOAD Dec 16 03:15:34.867000 audit[3841]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3381 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131623730656136643764303130376534643539313163656133616230 Dec 16 03:15:34.948486 containerd[1591]: time="2025-12-16T03:15:34.948402469Z" level=info msg="StartContainer for \"11b70ea6d7d0107e4d5911cea3ab0f1e64ba4b3e5c03426cb053977362dc621c\" returns 
successfully" Dec 16 03:15:35.175983 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:15:35.176637 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 03:15:35.517686 kubelet[2815]: E1216 03:15:35.517549 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:35.581320 kubelet[2815]: I1216 03:15:35.580978 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rczmg\" (UniqueName: \"kubernetes.io/projected/72811ce9-36e4-402b-b055-27281a8b884a-kube-api-access-rczmg\") pod \"72811ce9-36e4-402b-b055-27281a8b884a\" (UID: \"72811ce9-36e4-402b-b055-27281a8b884a\") " Dec 16 03:15:35.581320 kubelet[2815]: I1216 03:15:35.581047 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72811ce9-36e4-402b-b055-27281a8b884a-whisker-ca-bundle\") pod \"72811ce9-36e4-402b-b055-27281a8b884a\" (UID: \"72811ce9-36e4-402b-b055-27281a8b884a\") " Dec 16 03:15:35.585091 kubelet[2815]: I1216 03:15:35.584215 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72811ce9-36e4-402b-b055-27281a8b884a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "72811ce9-36e4-402b-b055-27281a8b884a" (UID: "72811ce9-36e4-402b-b055-27281a8b884a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:15:35.586261 kubelet[2815]: I1216 03:15:35.586159 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72811ce9-36e4-402b-b055-27281a8b884a-whisker-backend-key-pair\") pod \"72811ce9-36e4-402b-b055-27281a8b884a\" (UID: \"72811ce9-36e4-402b-b055-27281a8b884a\") " Dec 16 03:15:35.586460 kubelet[2815]: I1216 03:15:35.586388 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72811ce9-36e4-402b-b055-27281a8b884a-whisker-ca-bundle\") on node \"ci-4547.0.0-7-1189c174c4\" DevicePath \"\"" Dec 16 03:15:35.644950 systemd[1]: var-lib-kubelet-pods-72811ce9\x2d36e4\x2d402b\x2db055\x2d27281a8b884a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 03:15:35.653856 kubelet[2815]: I1216 03:15:35.653062 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72811ce9-36e4-402b-b055-27281a8b884a-kube-api-access-rczmg" (OuterVolumeSpecName: "kube-api-access-rczmg") pod "72811ce9-36e4-402b-b055-27281a8b884a" (UID: "72811ce9-36e4-402b-b055-27281a8b884a"). InnerVolumeSpecName "kube-api-access-rczmg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:15:35.657187 kubelet[2815]: I1216 03:15:35.657110 2815 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72811ce9-36e4-402b-b055-27281a8b884a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "72811ce9-36e4-402b-b055-27281a8b884a" (UID: "72811ce9-36e4-402b-b055-27281a8b884a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:15:35.659115 systemd[1]: var-lib-kubelet-pods-72811ce9\x2d36e4\x2d402b\x2db055\x2d27281a8b884a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drczmg.mount: Deactivated successfully. Dec 16 03:15:35.687230 kubelet[2815]: I1216 03:15:35.687185 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72811ce9-36e4-402b-b055-27281a8b884a-whisker-backend-key-pair\") on node \"ci-4547.0.0-7-1189c174c4\" DevicePath \"\"" Dec 16 03:15:35.687230 kubelet[2815]: I1216 03:15:35.687226 2815 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rczmg\" (UniqueName: \"kubernetes.io/projected/72811ce9-36e4-402b-b055-27281a8b884a-kube-api-access-rczmg\") on node \"ci-4547.0.0-7-1189c174c4\" DevicePath \"\"" Dec 16 03:15:36.171856 systemd[1]: Removed slice kubepods-besteffort-pod72811ce9_36e4_402b_b055_27281a8b884a.slice - libcontainer container kubepods-besteffort-pod72811ce9_36e4_402b_b055_27281a8b884a.slice. Dec 16 03:15:36.522542 kubelet[2815]: E1216 03:15:36.522424 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:36.576674 kubelet[2815]: I1216 03:15:36.574927 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n9gwf" podStartSLOduration=3.741659624 podStartE2EDuration="21.574905397s" podCreationTimestamp="2025-12-16 03:15:15 +0000 UTC" firstStartedPulling="2025-12-16 03:15:16.496912102 +0000 UTC m=+22.562692156" lastFinishedPulling="2025-12-16 03:15:34.330157872 +0000 UTC m=+40.395937929" observedRunningTime="2025-12-16 03:15:35.59069978 +0000 UTC m=+41.656479855" watchObservedRunningTime="2025-12-16 03:15:36.574905397 +0000 UTC m=+42.640685469" Dec 16 03:15:36.689033 systemd[1]: Created slice kubepods-besteffort-pod4ef57fb4_ee02_432e_a890_17cb5351bf0b.slice - libcontainer container kubepods-besteffort-pod4ef57fb4_ee02_432e_a890_17cb5351bf0b.slice. 
Dec 16 03:15:36.796086 kubelet[2815]: I1216 03:15:36.795911 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ef57fb4-ee02-432e-a890-17cb5351bf0b-whisker-backend-key-pair\") pod \"whisker-78c76c6f97-2ccvf\" (UID: \"4ef57fb4-ee02-432e-a890-17cb5351bf0b\") " pod="calico-system/whisker-78c76c6f97-2ccvf" Dec 16 03:15:36.796086 kubelet[2815]: I1216 03:15:36.796041 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ef57fb4-ee02-432e-a890-17cb5351bf0b-whisker-ca-bundle\") pod \"whisker-78c76c6f97-2ccvf\" (UID: \"4ef57fb4-ee02-432e-a890-17cb5351bf0b\") " pod="calico-system/whisker-78c76c6f97-2ccvf" Dec 16 03:15:36.796316 kubelet[2815]: I1216 03:15:36.796097 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz29d\" (UniqueName: \"kubernetes.io/projected/4ef57fb4-ee02-432e-a890-17cb5351bf0b-kube-api-access-qz29d\") pod \"whisker-78c76c6f97-2ccvf\" (UID: \"4ef57fb4-ee02-432e-a890-17cb5351bf0b\") " pod="calico-system/whisker-78c76c6f97-2ccvf" Dec 16 03:15:36.993862 containerd[1591]: time="2025-12-16T03:15:36.993768342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c76c6f97-2ccvf,Uid:4ef57fb4-ee02-432e-a890-17cb5351bf0b,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:37.165449 kubelet[2815]: E1216 03:15:37.165399 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:37.167561 containerd[1591]: time="2025-12-16T03:15:37.167468736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lx6,Uid:dd9d79d8-7a6e-4723-b884-0936ba21acc5,Namespace:kube-system,Attempt:0,}" Dec 16 03:15:37.447626 systemd-networkd[1498]: cali05c4d2758e0: Link UP Dec 16 03:15:37.454851 systemd-networkd[1498]: cali05c4d2758e0: Gained carrier Dec 16 03:15:37.507294 containerd[1591]: 2025-12-16 03:15:37.071 [INFO][3999] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:37.507294 containerd[1591]: 2025-12-16 03:15:37.108 [INFO][3999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0 whisker-78c76c6f97- calico-system 4ef57fb4-ee02-432e-a890-17cb5351bf0b 964 0 2025-12-16 03:15:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78c76c6f97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 whisker-78c76c6f97-2ccvf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali05c4d2758e0 [] [] }} ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-" Dec 16 03:15:37.507294 containerd[1591]: 2025-12-16 03:15:37.109 [INFO][3999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 
03:15:37.507294 containerd[1591]: 2025-12-16 03:15:37.334 [INFO][4035] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" HandleID="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Workload="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.343 [INFO][4035] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" HandleID="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Workload="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003439c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-7-1189c174c4", "pod":"whisker-78c76c6f97-2ccvf", "timestamp":"2025-12-16 03:15:37.334716165 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.343 [INFO][4035] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.345 [INFO][4035] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.345 [INFO][4035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.365 [INFO][4035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.385 [INFO][4035] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.394 [INFO][4035] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.397 [INFO][4035] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.507697 containerd[1591]: 2025-12-16 03:15:37.401 [INFO][4035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.401 [INFO][4035] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.403 [INFO][4035] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.412 [INFO][4035] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.420 [INFO][4035] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.67.65/26] block=192.168.67.64/26 handle="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.420 [INFO][4035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.65/26] handle="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.420 [INFO][4035] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:37.508314 containerd[1591]: 2025-12-16 03:15:37.420 [INFO][4035] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.65/26] IPv6=[] ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" HandleID="k8s-pod-network.70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Workload="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 03:15:37.508995 containerd[1591]: 2025-12-16 03:15:37.426 [INFO][3999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0", GenerateName:"whisker-78c76c6f97-", Namespace:"calico-system", SelfLink:"", UID:"4ef57fb4-ee02-432e-a890-17cb5351bf0b", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c76c6f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"whisker-78c76c6f97-2ccvf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05c4d2758e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:37.508995 containerd[1591]: 2025-12-16 03:15:37.427 [INFO][3999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.65/32] ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 03:15:37.509410 containerd[1591]: 2025-12-16 03:15:37.427 [INFO][3999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05c4d2758e0 ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 03:15:37.509410 containerd[1591]: 2025-12-16 03:15:37.460 
[INFO][3999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 03:15:37.509920 containerd[1591]: 2025-12-16 03:15:37.463 [INFO][3999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0", GenerateName:"whisker-78c76c6f97-", Namespace:"calico-system", SelfLink:"", UID:"4ef57fb4-ee02-432e-a890-17cb5351bf0b", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c76c6f97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca", Pod:"whisker-78c76c6f97-2ccvf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05c4d2758e0", MAC:"32:12:c3:9d:e5:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:37.510201 containerd[1591]: 2025-12-16 03:15:37.491 [INFO][3999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" Namespace="calico-system" Pod="whisker-78c76c6f97-2ccvf" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-whisker--78c76c6f97--2ccvf-eth0" Dec 16 03:15:37.532511 kubelet[2815]: E1216 03:15:37.532446 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:37.649758 systemd-networkd[1498]: cali64b2eb0eaea: Link UP Dec 16 03:15:37.652118 systemd-networkd[1498]: cali64b2eb0eaea: Gained carrier Dec 16 03:15:37.703673 containerd[1591]: 2025-12-16 03:15:37.238 [INFO][4057] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:37.703673 containerd[1591]: 2025-12-16 03:15:37.267 [INFO][4057] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0 coredns-674b8bbfcf- kube-system dd9d79d8-7a6e-4723-b884-0936ba21acc5 889 0 2025-12-16 03:14:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 coredns-674b8bbfcf-r9lx6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64b2eb0eaea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-" Dec 16 03:15:37.703673 containerd[1591]: 2025-12-16 03:15:37.267 [INFO][4057] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.703673 containerd[1591]: 2025-12-16 03:15:37.353 [INFO][4072] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" HandleID="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Workload="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.355 [INFO][4072] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" HandleID="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Workload="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-7-1189c174c4", "pod":"coredns-674b8bbfcf-r9lx6", "timestamp":"2025-12-16 03:15:37.353381491 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.358 [INFO][4072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.420 [INFO][4072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.422 [INFO][4072] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.471 [INFO][4072] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.502 [INFO][4072] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.521 [INFO][4072] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.530 [INFO][4072] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705279 containerd[1591]: 2025-12-16 03:15:37.539 [INFO][4072] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.539 [INFO][4072] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.553 [INFO][4072] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.584 [INFO][4072] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.619 [INFO][4072] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.66/26] block=192.168.67.64/26 handle="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.619 [INFO][4072] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.66/26] handle="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.619 [INFO][4072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
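The ipam.go entries above trace the pattern Calico follows on this node: confirm the host's affinity for block 192.168.67.64/26 while holding the host-wide IPAM lock, then claim the next free address from that block (192.168.67.66 here for coredns, after 192.168.67.65 went to the whisker pod). The Go sketch below is a deliberately simplified illustration of that "next free IP from an affine block" step only; it is not Calico's implementation, and every name in it is hypothetical.

```go
// Illustrative sketch only: a toy "next free IP from an affine /26 block",
// loosely mirroring the ipam.go flow logged above (affinity confirmed, then
// an address claimed). NOT Calico's code; all names are hypothetical.
package main

import (
	"fmt"
	"net"
)

// nextFreeIP returns the first address in cidr that is not already claimed.
func nextFreeIP(cidr string, claimed map[string]bool) (net.IP, error) {
	ip, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = inc(cur) {
		if !claimed[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", cidr)
}

// inc returns ip+1, copying so the caller's slice is left untouched.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	// .64 (network address) and .65 (whisker pod) are already taken, so the
	// next claim lands on .66, matching the coredns assignment logged above.
	claimed := map[string]bool{"192.168.67.64": true, "192.168.67.65": true}
	ip, _ := nextFreeIP("192.168.67.64/26", claimed)
	fmt.Println(ip) // 192.168.67.66
}
```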
Dec 16 03:15:37.705526 containerd[1591]: 2025-12-16 03:15:37.619 [INFO][4072] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.66/26] IPv6=[] ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" HandleID="k8s-pod-network.30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Workload="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.705704 containerd[1591]: 2025-12-16 03:15:37.635 [INFO][4057] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dd9d79d8-7a6e-4723-b884-0936ba21acc5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"coredns-674b8bbfcf-r9lx6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64b2eb0eaea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:37.705704 containerd[1591]: 2025-12-16 03:15:37.636 [INFO][4057] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.66/32] ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.705704 containerd[1591]: 2025-12-16 03:15:37.636 [INFO][4057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64b2eb0eaea ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.705704 containerd[1591]: 2025-12-16 03:15:37.649 [INFO][4057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.705704 containerd[1591]: 2025-12-16 03:15:37.651 [INFO][4057] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dd9d79d8-7a6e-4723-b884-0936ba21acc5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b", Pod:"coredns-674b8bbfcf-r9lx6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64b2eb0eaea", MAC:"9e:c5:28:a8:31:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:37.705704 containerd[1591]: 2025-12-16 03:15:37.690 [INFO][4057] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9lx6" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--r9lx6-eth0" Dec 16 03:15:37.783025 containerd[1591]: time="2025-12-16T03:15:37.782367638Z" level=info msg="connecting to shim 30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b" address="unix:///run/containerd/s/4511e627ef9b1d5081329e94ea77a8cb89a5fa13dbd1ac58802aa344ad39e3d3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:37.787587 containerd[1591]: time="2025-12-16T03:15:37.787521483Z" level=info msg="connecting to shim 70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca" address="unix:///run/containerd/s/55921cadd2bfd02c3f0f0805a9c04cb85cde11f4204e524266bcc1f998693b9e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:37.835357 systemd[1]: Started cri-containerd-70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca.scope - libcontainer container 
70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca. Dec 16 03:15:37.845436 systemd[1]: Started cri-containerd-30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b.scope - libcontainer container 30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b. Dec 16 03:15:37.866000 audit: BPF prog-id=173 op=LOAD Dec 16 03:15:37.867000 audit: BPF prog-id=174 op=LOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.867000 audit: BPF prog-id=174 op=UNLOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.867000 audit: BPF prog-id=175 op=LOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.867000 audit: BPF prog-id=176 op=LOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.867000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.867000 audit: BPF prog-id=175 op=UNLOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.867000 audit: BPF prog-id=177 op=LOAD Dec 16 03:15:37.867000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4142 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330653639373265363463643761626635393761383834656133386264 Dec 16 03:15:37.887000 audit: BPF prog-id=178 op=LOAD Dec 16 03:15:37.888000 audit: BPF prog-id=179 op=LOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.888000 audit: BPF prog-id=179 op=UNLOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.888000 audit: BPF prog-id=180 op=LOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.888000 audit: BPF prog-id=181 op=LOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.888000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.888000 audit: BPF prog-id=180 op=UNLOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.888000 audit: BPF prog-id=182 op=LOAD Dec 16 03:15:37.888000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4144 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:37.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730626464376366333234646334373964626130346138383833623136 Dec 16 03:15:37.930326 containerd[1591]: time="2025-12-16T03:15:37.930250725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9lx6,Uid:dd9d79d8-7a6e-4723-b884-0936ba21acc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b\"" Dec 16 03:15:37.932055 kubelet[2815]: E1216 03:15:37.932025 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Dec 16 03:15:37.937022 containerd[1591]: time="2025-12-16T03:15:37.936946686Z" level=info msg="CreateContainer within sandbox \"30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:15:37.952615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342661803.mount: Deactivated successfully. Dec 16 03:15:37.953902 containerd[1591]: time="2025-12-16T03:15:37.953139572Z" level=info msg="Container 0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:37.958274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501956447.mount: Deactivated successfully. Dec 16 03:15:37.970509 containerd[1591]: time="2025-12-16T03:15:37.970177636Z" level=info msg="CreateContainer within sandbox \"30e6972e64cd7abf597a884ea38bd86e1ab8d7b5e48e9d15bc62e2caa97cf10b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764\"" Dec 16 03:15:37.971394 containerd[1591]: time="2025-12-16T03:15:37.971273308Z" level=info msg="StartContainer for \"0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764\"" Dec 16 03:15:37.973961 containerd[1591]: time="2025-12-16T03:15:37.973915221Z" level=info msg="connecting to shim 0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764" address="unix:///run/containerd/s/4511e627ef9b1d5081329e94ea77a8cb89a5fa13dbd1ac58802aa344ad39e3d3" protocol=ttrpc version=3 Dec 16 03:15:37.995370 containerd[1591]: time="2025-12-16T03:15:37.995300034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c76c6f97-2ccvf,Uid:4ef57fb4-ee02-432e-a890-17cb5351bf0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"70bdd7cf324dc479dba04a8883b16792dc24f0c99801b67287c3070c4058d6ca\"" Dec 16 03:15:37.998079 containerd[1591]: time="2025-12-16T03:15:37.998044478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:15:38.020074 systemd[1]: Started cri-containerd-0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764.scope - libcontainer container 0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764. 
Dec 16 03:15:38.043000 audit: BPF prog-id=183 op=LOAD Dec 16 03:15:38.043000 audit: BPF prog-id=184 op=LOAD Dec 16 03:15:38.043000 audit[4210]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.044000 audit: BPF prog-id=184 op=UNLOAD Dec 16 03:15:38.044000 audit[4210]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.044000 audit: BPF prog-id=185 op=LOAD Dec 16 03:15:38.044000 audit[4210]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.044000 audit: BPF prog-id=186 op=LOAD Dec 16 03:15:38.044000 audit[4210]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.045000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:15:38.045000 audit[4210]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.045000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:15:38.045000 audit[4210]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.045000 audit: BPF prog-id=187 op=LOAD Dec 16 03:15:38.045000 audit[4210]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4142 pid=4210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064373364643635316562666462343830323461303563396563316263 Dec 16 03:15:38.075134 containerd[1591]: time="2025-12-16T03:15:38.075016734Z" level=info msg="StartContainer for \"0d73dd651ebfdb48024a05c9ec1bc83bc85af39f38712fb15e89f41ac305e764\" returns successfully" Dec 16 03:15:38.165579 containerd[1591]: time="2025-12-16T03:15:38.165083042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-79frt,Uid:a48216b5-38ee-49b5-8de6-29119905fab4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:15:38.167107 kubelet[2815]: I1216 03:15:38.167076 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72811ce9-36e4-402b-b055-27281a8b884a" path="/var/lib/kubelet/pods/72811ce9-36e4-402b-b055-27281a8b884a/volumes" Dec 16 03:15:38.349448 systemd-networkd[1498]: cali9ace3147f14: Link UP Dec 16 03:15:38.356477 systemd-networkd[1498]: cali9ace3147f14: Gained carrier Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.214 [INFO][4248] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.229 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0 calico-apiserver-b66776c65- calico-apiserver a48216b5-38ee-49b5-8de6-29119905fab4 884 0 2025-12-16 03:15:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b66776c65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 calico-apiserver-b66776c65-79frt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ace3147f14 [] [] }} ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.229 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" 
Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.277 [INFO][4261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" HandleID="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.278 [INFO][4261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" HandleID="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-7-1189c174c4", "pod":"calico-apiserver-b66776c65-79frt", "timestamp":"2025-12-16 03:15:38.277317238 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.278 [INFO][4261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.278 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.278 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.290 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.298 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.305 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.309 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.314 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.314 [INFO][4261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.316 [INFO][4261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1 Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.323 [INFO][4261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" 
host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.338 [INFO][4261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.67/26] block=192.168.67.64/26 handle="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.338 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.67/26] handle="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.338 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:38.381026 containerd[1591]: 2025-12-16 03:15:38.338 [INFO][4261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.67/26] IPv6=[] ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" HandleID="k8s-pod-network.fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.385140 containerd[1591]: 2025-12-16 03:15:38.341 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0", GenerateName:"calico-apiserver-b66776c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"a48216b5-38ee-49b5-8de6-29119905fab4", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66776c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"calico-apiserver-b66776c65-79frt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ace3147f14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:38.385140 containerd[1591]: 2025-12-16 03:15:38.342 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.67/32] ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.385140 containerd[1591]: 2025-12-16 03:15:38.342 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali9ace3147f14 ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.385140 containerd[1591]: 2025-12-16 03:15:38.356 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.385140 containerd[1591]: 2025-12-16 03:15:38.357 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0", GenerateName:"calico-apiserver-b66776c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"a48216b5-38ee-49b5-8de6-29119905fab4", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66776c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1", Pod:"calico-apiserver-b66776c65-79frt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ace3147f14", MAC:"aa:42:2e:76:89:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:38.385140 containerd[1591]: 2025-12-16 03:15:38.374 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-79frt" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--79frt-eth0" Dec 16 03:15:38.390575 containerd[1591]: time="2025-12-16T03:15:38.390520961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:38.391149 containerd[1591]: time="2025-12-16T03:15:38.391099488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:15:38.391272 containerd[1591]: 
time="2025-12-16T03:15:38.391229059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:38.391561 kubelet[2815]: E1216 03:15:38.391512 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:38.395697 kubelet[2815]: E1216 03:15:38.395213 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:38.400842 kubelet[2815]: E1216 03:15:38.400675 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4b735df0f3c44edda470dc6f58182ab2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qz29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c76c6f97-2ccvf_calico-system(4ef57fb4-ee02-432e-a890-17cb5351bf0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:38.416215 containerd[1591]: time="2025-12-16T03:15:38.416164294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:15:38.447322 containerd[1591]: time="2025-12-16T03:15:38.447199380Z" level=info msg="connecting to shim fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1" address="unix:///run/containerd/s/a43a3db66633b5243838e02de9b8ef768faebf313a9b7e257291c7febd5b5604" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:38.500085 systemd[1]: Started cri-containerd-fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1.scope - libcontainer container fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1. 
Dec 16 03:15:38.522000 audit: BPF prog-id=188 op=LOAD Dec 16 03:15:38.524000 audit: BPF prog-id=189 op=LOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.524000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.524000 audit: BPF prog-id=190 op=LOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.524000 audit: BPF prog-id=191 op=LOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.524000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.524000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.524000 audit: BPF prog-id=192 op=LOAD Dec 16 03:15:38.524000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4288 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663626233303534333036356237383931653335313261636563363935 Dec 16 03:15:38.541026 kubelet[2815]: E1216 03:15:38.540993 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:38.582253 kubelet[2815]: I1216 03:15:38.581691 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r9lx6" podStartSLOduration=40.581673253 podStartE2EDuration="40.581673253s" podCreationTimestamp="2025-12-16 03:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:15:38.560976712 +0000 UTC m=+44.626756781" watchObservedRunningTime="2025-12-16 03:15:38.581673253 +0000 UTC m=+44.647453325" Dec 16 03:15:38.615000 audit[4324]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:38.615000 audit[4324]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdd19d1c00 a2=0 a3=7ffdd19d1bec items=0 ppid=2938 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:38.619000 audit[4324]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:38.619000 audit[4324]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd19d1c00 a2=0 a3=0 items=0 ppid=2938 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:38.665653 containerd[1591]: time="2025-12-16T03:15:38.665534114Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b66776c65-79frt,Uid:a48216b5-38ee-49b5-8de6-29119905fab4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fcbb30543065b7891e3512acec6955d21ea45482b41a2df3b87f02df1ca605d1\"" Dec 16 03:15:38.675000 audit[4336]: NETFILTER_CFG table=filter:121 family=2 entries=19 op=nft_register_rule pid=4336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:38.675000 audit[4336]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe791bb630 a2=0 a3=7ffe791bb61c items=0 ppid=2938 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:38.678000 audit[4336]: NETFILTER_CFG table=nat:122 family=2 entries=33 op=nft_register_chain pid=4336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:38.678000 audit[4336]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffe791bb630 a2=0 a3=7ffe791bb61c items=0 ppid=2938 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:38.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:38.742925 systemd-networkd[1498]: cali05c4d2758e0: Gained IPv6LL Dec 16 03:15:38.751108 containerd[1591]: time="2025-12-16T03:15:38.750736188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:38.752230 containerd[1591]: time="2025-12-16T03:15:38.751984569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:15:38.752623 containerd[1591]: time="2025-12-16T03:15:38.752492056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:38.753497 kubelet[2815]: E1216 03:15:38.753021 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:38.753497 kubelet[2815]: E1216 03:15:38.753089 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:38.754065 kubelet[2815]: E1216 03:15:38.753768 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c76c6f97-2ccvf_calico-system(4ef57fb4-ee02-432e-a890-17cb5351bf0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:38.755254 containerd[1591]: time="2025-12-16T03:15:38.755153245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:38.755707 kubelet[2815]: E1216 03:15:38.755580 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c76c6f97-2ccvf" podUID="4ef57fb4-ee02-432e-a890-17cb5351bf0b" Dec 16 03:15:39.059173 containerd[1591]: time="2025-12-16T03:15:39.058158631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:39.060595 containerd[1591]: time="2025-12-16T03:15:39.060424361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 03:15:39.060595 containerd[1591]: time="2025-12-16T03:15:39.060565117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:39.060926 kubelet[2815]: E1216 03:15:39.060854 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:39.060993 kubelet[2815]: E1216 03:15:39.060939 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:39.061399 kubelet[2815]: E1216 03:15:39.061160 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4sbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b66776c65-79frt_calico-apiserver(a48216b5-38ee-49b5-8de6-29119905fab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:39.063136 kubelet[2815]: E1216 03:15:39.063102 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:15:39.164832 kubelet[2815]: E1216 03:15:39.164693 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:39.166336 containerd[1591]: time="2025-12-16T03:15:39.166275080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8tqdq,Uid:3c29e9a8-e273-4ab1-b741-61dd6cf59264,Namespace:kube-system,Attempt:0,}" Dec 16 03:15:39.254998 systemd-networkd[1498]: cali64b2eb0eaea: Gained IPv6LL Dec 16 03:15:39.324055 systemd-networkd[1498]: calief33a3e75d0: Link UP Dec 16 03:15:39.325899 systemd-networkd[1498]: calief33a3e75d0: Gained carrier Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.203 [INFO][4346] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.220 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0 coredns-674b8bbfcf- kube-system 3c29e9a8-e273-4ab1-b741-61dd6cf59264 879 0 2025-12-16 03:14:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 coredns-674b8bbfcf-8tqdq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief33a3e75d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.220 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.259 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" HandleID="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Workload="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.259 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" HandleID="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Workload="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-7-1189c174c4", "pod":"coredns-674b8bbfcf-8tqdq", "timestamp":"2025-12-16 03:15:39.259248169 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.259 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.259 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.259 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.270 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.277 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.284 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.287 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.290 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.290 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.293 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.300 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.316 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.68/26] block=192.168.67.64/26 handle="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.316 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.68/26] handle="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.317 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
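The IPAM trace above shows the host's affine block 192.168.67.64/26 being loaded and 192.168.67.68 being claimed from it for the coredns pod. A minimal sketch of that block arithmetic (illustration only, not Calico's IPAM code):

```python
# Illustrative only -- not Calico's IPAM implementation. It just checks the
# arithmetic visible in the log above: the host's affine block is
# 192.168.67.64/26 and the pod address claimed from it is 192.168.67.68.
import ipaddress

block = ipaddress.ip_network("192.168.67.64/26")
pod_ip = ipaddress.ip_address("192.168.67.68")

print(block.num_addresses)        # 64 addresses in a /26 block
print(pod_ip in block)            # True: .68 lies inside .64 - .127
print(block.broadcast_address)    # 192.168.67.127, the top of the block
```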
Dec 16 03:15:39.348931 containerd[1591]: 2025-12-16 03:15:39.317 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.68/26] IPv6=[] ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" HandleID="k8s-pod-network.c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Workload="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.351436 containerd[1591]: 2025-12-16 03:15:39.319 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3c29e9a8-e273-4ab1-b741-61dd6cf59264", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"coredns-674b8bbfcf-8tqdq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief33a3e75d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:39.351436 containerd[1591]: 2025-12-16 03:15:39.320 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.68/32] ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.351436 containerd[1591]: 2025-12-16 03:15:39.320 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief33a3e75d0 ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.351436 containerd[1591]: 2025-12-16 03:15:39.327 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.351436 containerd[1591]: 2025-12-16 03:15:39.328 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3c29e9a8-e273-4ab1-b741-61dd6cf59264", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a", Pod:"coredns-674b8bbfcf-8tqdq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief33a3e75d0", MAC:"b6:8f:ff:7a:c4:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:39.351436 containerd[1591]: 2025-12-16 03:15:39.344 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tqdq" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-coredns--674b8bbfcf--8tqdq-eth0" Dec 16 03:15:39.377432 containerd[1591]: time="2025-12-16T03:15:39.376903900Z" level=info msg="connecting to shim c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a" address="unix:///run/containerd/s/d72592232d233126280e477933f00d3884c662e4f23c5063fe16f113be1daea2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:39.415271 systemd[1]: Started cri-containerd-c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a.scope - libcontainer container c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a. 
Dec 16 03:15:39.436000 audit: BPF prog-id=193 op=LOAD Dec 16 03:15:39.437000 audit: BPF prog-id=194 op=LOAD Dec 16 03:15:39.437000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.438000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:15:39.438000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.438000 audit: BPF prog-id=195 op=LOAD Dec 16 03:15:39.438000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.438000 audit: BPF prog-id=196 op=LOAD Dec 16 03:15:39.438000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.438000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:15:39.438000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.438000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:15:39.438000 audit[4389]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.438000 audit: BPF prog-id=197 op=LOAD Dec 16 03:15:39.438000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331376530356334306235323637333536353262643130663939316431 Dec 16 03:15:39.484332 containerd[1591]: time="2025-12-16T03:15:39.484278373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8tqdq,Uid:3c29e9a8-e273-4ab1-b741-61dd6cf59264,Namespace:kube-system,Attempt:0,} returns sandbox id \"c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a\"" Dec 16 03:15:39.485930 kubelet[2815]: E1216 03:15:39.485898 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:39.491124 containerd[1591]: time="2025-12-16T03:15:39.491080444Z" level=info msg="CreateContainer within sandbox \"c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:15:39.499882 containerd[1591]: time="2025-12-16T03:15:39.499835544Z" level=info msg="Container 5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:39.507314 containerd[1591]: time="2025-12-16T03:15:39.507272698Z" level=info msg="CreateContainer within sandbox \"c17e05c40b526735652bd10f991d1f00c077e2e86e00ee21d7e0e57359f56d5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9\"" Dec 16 03:15:39.509482 containerd[1591]: time="2025-12-16T03:15:39.509440025Z" level=info msg="StartContainer for \"5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9\"" Dec 16 03:15:39.509956 systemd-networkd[1498]: cali9ace3147f14: Gained IPv6LL Dec 16 03:15:39.512534 containerd[1591]: time="2025-12-16T03:15:39.512488994Z" level=info msg="connecting to shim 5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9" address="unix:///run/containerd/s/d72592232d233126280e477933f00d3884c662e4f23c5063fe16f113be1daea2" protocol=ttrpc version=3 Dec 16 03:15:39.537554 systemd[1]: Started cri-containerd-5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9.scope - libcontainer container 5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9. 
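The audit PROCTITLE records above carry the runc command line hex-encoded with NUL separators. A small helper (illustration only) that decodes such a value back into argv; the sample is the start of the runc proctitle logged above, which the kernel truncates, so the container ID at the end of the full record appears cut off:

```python
# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) back into
# a readable command line.
def decode_proctitle(hexstr: str) -> list[str]:
    return bytes.fromhex(hexstr).decode(errors="replace").split("\x00")

# First bytes of the runc proctitle from the audit records above.
sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
          "2F6B38732E696F002D2D6C6F67")
print(decode_proctitle(sample))
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```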
Dec 16 03:15:39.555838 kubelet[2815]: E1216 03:15:39.555763 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:39.558364 kubelet[2815]: E1216 03:15:39.557924 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c76c6f97-2ccvf" podUID="4ef57fb4-ee02-432e-a890-17cb5351bf0b" Dec 16 03:15:39.559375 kubelet[2815]: E1216 03:15:39.559199 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:15:39.561000 audit: BPF prog-id=198 op=LOAD Dec 16 03:15:39.562000 audit: BPF prog-id=199 op=LOAD Dec 16 03:15:39.562000 audit[4414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.562000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:15:39.562000 audit[4414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.564000 audit: BPF prog-id=200 op=LOAD Dec 16 03:15:39.564000 audit[4414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:15:39.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.564000 audit: BPF prog-id=201 op=LOAD Dec 16 03:15:39.564000 audit[4414]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.564000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:15:39.564000 audit[4414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.565000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:15:39.565000 audit[4414]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.565000 audit: BPF prog-id=202 op=LOAD Dec 16 03:15:39.565000 audit[4414]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4378 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566316630633561316364636562313062373466373030323437613839 Dec 16 03:15:39.619105 containerd[1591]: time="2025-12-16T03:15:39.619055010Z" level=info msg="StartContainer for \"5f1f0c5a1cdceb10b74f700247a89820ed5168fb688608ba77d4ad1164f34be9\" returns successfully" Dec 16 03:15:39.789000 audit[4455]: NETFILTER_CFG table=filter:123 family=2 entries=16 op=nft_register_rule pid=4455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:39.789000 audit[4455]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd05e9fab0 a2=0 a3=7ffd05e9fa9c items=0 
ppid=2938 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:39.796000 audit[4455]: NETFILTER_CFG table=nat:124 family=2 entries=18 op=nft_register_rule pid=4455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:39.796000 audit[4455]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffd05e9fab0 a2=0 a3=0 items=0 ppid=2938 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:39.908970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615709984.mount: Deactivated successfully. Dec 16 03:15:40.167526 containerd[1591]: time="2025-12-16T03:15:40.167031126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9g7b7,Uid:b1b86abd-1663-4902-987a-286c262c84b0,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:40.339974 systemd-networkd[1498]: calidf846fca254: Link UP Dec 16 03:15:40.340644 systemd-networkd[1498]: calidf846fca254: Gained carrier Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.210 [INFO][4470] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.224 [INFO][4470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0 goldmane-666569f655- calico-system b1b86abd-1663-4902-987a-286c262c84b0 887 0 2025-12-16 03:15:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 goldmane-666569f655-9g7b7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidf846fca254 [] [] }} ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.224 [INFO][4470] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.270 [INFO][4482] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" HandleID="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Workload="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.270 [INFO][4482] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" HandleID="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Workload="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-7-1189c174c4", "pod":"goldmane-666569f655-9g7b7", "timestamp":"2025-12-16 03:15:40.270133749 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.270 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.270 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.270 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.281 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.289 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.299 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.302 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.306 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.306 [INFO][4482] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.309 [INFO][4482] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3 Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.319 [INFO][4482] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.329 [INFO][4482] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.69/26] block=192.168.67.64/26 handle="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.329 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.69/26] handle="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.329 [INFO][4482] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:40.377623 containerd[1591]: 2025-12-16 03:15:40.329 [INFO][4482] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.69/26] IPv6=[] ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" HandleID="k8s-pod-network.cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Workload="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.378433 containerd[1591]: 2025-12-16 03:15:40.332 [INFO][4470] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b1b86abd-1663-4902-987a-286c262c84b0", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"goldmane-666569f655-9g7b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf846fca254", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:40.378433 containerd[1591]: 2025-12-16 03:15:40.332 [INFO][4470] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.69/32] ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.378433 containerd[1591]: 2025-12-16 03:15:40.332 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf846fca254 ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.378433 containerd[1591]: 2025-12-16 03:15:40.341 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.378433 containerd[1591]: 2025-12-16 03:15:40.342 [INFO][4470] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b1b86abd-1663-4902-987a-286c262c84b0", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3", Pod:"goldmane-666569f655-9g7b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf846fca254", MAC:"5a:03:e5:be:6e:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:40.378433 containerd[1591]: 2025-12-16 03:15:40.372 [INFO][4470] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" Namespace="calico-system" Pod="goldmane-666569f655-9g7b7" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-goldmane--666569f655--9g7b7-eth0" Dec 16 03:15:40.407687 containerd[1591]: time="2025-12-16T03:15:40.407597969Z" level=info msg="connecting to shim cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3" address="unix:///run/containerd/s/27649cb824b19b17ce667d42586b9875a4f7ba3f9e3f1eb8f6fab74b3c86ef0c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:40.456265 systemd[1]: Started cri-containerd-cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3.scope - libcontainer container cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3. 
Dec 16 03:15:40.474000 audit: BPF prog-id=203 op=LOAD Dec 16 03:15:40.476157 kernel: kauditd_printk_skb: 155 callbacks suppressed Dec 16 03:15:40.476227 kernel: audit: type=1334 audit(1765854940.474:630): prog-id=203 op=LOAD Dec 16 03:15:40.477000 audit: BPF prog-id=204 op=LOAD Dec 16 03:15:40.480814 kernel: audit: type=1334 audit(1765854940.477:631): prog-id=204 op=LOAD Dec 16 03:15:40.480939 kernel: audit: type=1300 audit(1765854940.477:631): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.488837 kernel: audit: type=1327 audit(1765854940.477:631): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.477000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.491407 kernel: audit: type=1334 audit(1765854940.477:632): prog-id=204 op=UNLOAD Dec 16 03:15:40.491500 kernel: audit: type=1300 audit(1765854940.477:632): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.495200 kernel: audit: type=1327 audit(1765854940.477:632): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.477000 audit: BPF prog-id=205 op=LOAD Dec 16 03:15:40.498059 kernel: audit: type=1334 audit(1765854940.477:633): prog-id=205 op=LOAD Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.499829 kernel: audit: type=1300 audit(1765854940.477:633): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.503651 kernel: audit: type=1327 audit(1765854940.477:633): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.477000 audit: BPF prog-id=206 op=LOAD Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.477000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.477000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.477000 audit: BPF prog-id=207 op=LOAD Dec 16 03:15:40.477000 audit[4514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4501 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.477000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353333306437626638393661616232303133346664646230616537 Dec 16 03:15:40.550314 containerd[1591]: time="2025-12-16T03:15:40.550197262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9g7b7,Uid:b1b86abd-1663-4902-987a-286c262c84b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd5330d7bf896aab20134fddb0ae70ea162a8bf8f083f4734dceca236fcb61a3\"" Dec 16 03:15:40.555103 containerd[1591]: time="2025-12-16T03:15:40.554856619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:15:40.562500 kubelet[2815]: E1216 03:15:40.562443 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:40.564207 kubelet[2815]: E1216 03:15:40.563726 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:40.565182 kubelet[2815]: E1216 03:15:40.565098 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:15:40.618758 kubelet[2815]: I1216 03:15:40.617259 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8tqdq" podStartSLOduration=42.617227195 podStartE2EDuration="42.617227195s" podCreationTimestamp="2025-12-16 03:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:15:40.597187246 +0000 UTC m=+46.662967319" watchObservedRunningTime="2025-12-16 03:15:40.617227195 +0000 UTC m=+46.683007272" Dec 16 03:15:40.678000 audit[4540]: NETFILTER_CFG table=filter:125 family=2 entries=16 op=nft_register_rule pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:40.678000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeae825a00 a2=0 a3=7ffeae8259ec items=0 ppid=2938 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:40.684000 audit[4540]: NETFILTER_CFG table=nat:126 family=2 entries=42 op=nft_register_rule pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:40.684000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffeae825a00 a2=0 a3=7ffeae8259ec items=0 ppid=2938 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:40.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:40.869814 containerd[1591]: time="2025-12-16T03:15:40.869664576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:40.871097 containerd[1591]: time="2025-12-16T03:15:40.870779950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:15:40.871097 containerd[1591]: time="2025-12-16T03:15:40.870972499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:40.872045 kubelet[2815]: E1216 03:15:40.871760 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:40.872342 kubelet[2815]: E1216 03:15:40.872193 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:40.876475 kubelet[2815]: E1216 03:15:40.876388 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9g7b7_calico-system(b1b86abd-1663-4902-987a-286c262c84b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:40.877736 kubelet[2815]: E1216 03:15:40.877649 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0" Dec 16 03:15:41.166859 containerd[1591]: time="2025-12-16T03:15:41.166663860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944db5ff7-jqshs,Uid:e282fbeb-8283-4c6d-8776-2743ee6a3341,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:41.301943 systemd-networkd[1498]: calief33a3e75d0: Gained IPv6LL Dec 16 03:15:41.390443 systemd-networkd[1498]: cali3c90b728eda: Link UP Dec 16 03:15:41.392008 systemd-networkd[1498]: cali3c90b728eda: Gained carrier Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.239 [INFO][4562] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.267 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0 calico-kube-controllers-7944db5ff7- calico-system e282fbeb-8283-4c6d-8776-2743ee6a3341 888 0 2025-12-16 03:15:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7944db5ff7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 calico-kube-controllers-7944db5ff7-jqshs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3c90b728eda [] [] }} ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.267 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.316 [INFO][4573] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" HandleID="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.316 [INFO][4573] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" HandleID="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-7-1189c174c4", "pod":"calico-kube-controllers-7944db5ff7-jqshs", "timestamp":"2025-12-16 03:15:41.316315413 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.316 [INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.316 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.316 [INFO][4573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.332 [INFO][4573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.339 [INFO][4573] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.349 [INFO][4573] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.354 [INFO][4573] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.359 [INFO][4573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.359 [INFO][4573] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.361 [INFO][4573] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3 Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.370 [INFO][4573] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.382 [INFO][4573] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.70/26] block=192.168.67.64/26 handle="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.383 [INFO][4573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.70/26] handle="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.383 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:15:41.421997 containerd[1591]: 2025-12-16 03:15:41.383 [INFO][4573] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.70/26] IPv6=[] ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" HandleID="k8s-pod-network.6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.423779 containerd[1591]: 2025-12-16 03:15:41.386 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0", GenerateName:"calico-kube-controllers-7944db5ff7-", Namespace:"calico-system", SelfLink:"", UID:"e282fbeb-8283-4c6d-8776-2743ee6a3341", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7944db5ff7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"calico-kube-controllers-7944db5ff7-jqshs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c90b728eda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:41.423779 containerd[1591]: 2025-12-16 03:15:41.386 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.70/32] ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.423779 containerd[1591]: 2025-12-16 03:15:41.387 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c90b728eda ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.423779 containerd[1591]: 2025-12-16 03:15:41.393 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" 
WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.423779 containerd[1591]: 2025-12-16 03:15:41.393 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0", GenerateName:"calico-kube-controllers-7944db5ff7-", Namespace:"calico-system", SelfLink:"", UID:"e282fbeb-8283-4c6d-8776-2743ee6a3341", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7944db5ff7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3", Pod:"calico-kube-controllers-7944db5ff7-jqshs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c90b728eda", MAC:"3e:4b:40:2a:b6:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:41.423779 containerd[1591]: 2025-12-16 03:15:41.418 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" Namespace="calico-system" Pod="calico-kube-controllers-7944db5ff7-jqshs" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--kube--controllers--7944db5ff7--jqshs-eth0" Dec 16 03:15:41.456026 containerd[1591]: time="2025-12-16T03:15:41.455975403Z" level=info msg="connecting to shim 6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3" address="unix:///run/containerd/s/ec6eaf1302dbb81a7d5bf4918d91b69a0ded00fae7fb2891aa88378490df898a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:41.493164 systemd[1]: Started cri-containerd-6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3.scope - libcontainer container 6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3. 
Dec 16 03:15:41.510000 audit: BPF prog-id=208 op=LOAD Dec 16 03:15:41.511000 audit: BPF prog-id=209 op=LOAD Dec 16 03:15:41.511000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.511000 audit: BPF prog-id=209 op=UNLOAD Dec 16 03:15:41.511000 audit[4604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.513000 audit: BPF prog-id=210 op=LOAD Dec 16 03:15:41.513000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.513000 audit: BPF prog-id=211 op=LOAD Dec 16 03:15:41.513000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.513000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:15:41.513000 audit[4604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.513000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:15:41.513000 audit[4604]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.513000 audit: BPF prog-id=212 op=LOAD Dec 16 03:15:41.513000 audit[4604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4593 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638383563336164343739616166643866336639353961396566666334 Dec 16 03:15:41.561977 containerd[1591]: time="2025-12-16T03:15:41.561937652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7944db5ff7-jqshs,Uid:e282fbeb-8283-4c6d-8776-2743ee6a3341,Namespace:calico-system,Attempt:0,} returns sandbox id \"6885c3ad479aafd8f3f959a9effc40948d02ec66e9d90dc3ac772839604cddd3\"" Dec 16 03:15:41.567381 containerd[1591]: time="2025-12-16T03:15:41.567339238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:15:41.568740 kubelet[2815]: E1216 03:15:41.568670 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:41.572069 kubelet[2815]: E1216 03:15:41.571878 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0" Dec 16 03:15:41.612000 audit[4631]: NETFILTER_CFG table=filter:127 family=2 entries=16 op=nft_register_rule pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:41.612000 audit[4631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffebb2ac150 a2=0 a3=7ffebb2ac13c items=0 ppid=2938 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:41.622000 audit[4631]: NETFILTER_CFG table=nat:128 family=2 entries=54 op=nft_register_chain pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:41.622000 audit[4631]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffebb2ac150 
a2=0 a3=7ffebb2ac13c items=0 ppid=2938 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:41.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:41.889307 containerd[1591]: time="2025-12-16T03:15:41.889214321Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:41.890138 containerd[1591]: time="2025-12-16T03:15:41.890056823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:15:41.890138 containerd[1591]: time="2025-12-16T03:15:41.890101525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:41.890625 kubelet[2815]: E1216 03:15:41.890359 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:41.890625 kubelet[2815]: E1216 03:15:41.890420 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:41.890625 kubelet[2815]: E1216 03:15:41.890568 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8h2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7944db5ff7-jqshs_calico-system(e282fbeb-8283-4c6d-8776-2743ee6a3341): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:41.892136 kubelet[2815]: E1216 03:15:41.892101 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341" Dec 16 03:15:42.070095 systemd-networkd[1498]: calidf846fca254: Gained IPv6LL Dec 16 03:15:42.141865 kubelet[2815]: I1216 03:15:42.141390 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:15:42.142003 kubelet[2815]: E1216 03:15:42.141868 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:42.166737 containerd[1591]: time="2025-12-16T03:15:42.166683954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-kfd2d,Uid:3fd5d06e-1026-4a73-b6dc-d83cdf5f2977,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:15:42.166877 containerd[1591]: time="2025-12-16T03:15:42.166817440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jz64m,Uid:2f0d2d2c-6593-4c7a-9cdf-35214b834c16,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:42.461360 systemd-networkd[1498]: cali384eae8a747: Link UP Dec 16 03:15:42.463022 systemd-networkd[1498]: cali384eae8a747: Gained carrier Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.252 [INFO][4635] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.277 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0 calico-apiserver-b66776c65- calico-apiserver 3fd5d06e-1026-4a73-b6dc-d83cdf5f2977 893 
0 2025-12-16 03:15:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b66776c65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 calico-apiserver-b66776c65-kfd2d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali384eae8a747 [] [] }} ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.279 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.379 [INFO][4672] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" HandleID="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.379 [INFO][4672] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" HandleID="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-7-1189c174c4", "pod":"calico-apiserver-b66776c65-kfd2d", "timestamp":"2025-12-16 03:15:42.379094481 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.379 [INFO][4672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.379 [INFO][4672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
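Every image pull in this section fails the same way: ghcr.io returns 404 Not Found for the v3.30.4 tags of the Calico images, containerd reports NotFound, and kubelet puts the containers into back-off. A hedged Go sketch that reproduces the check straight against the registry; the /token endpoint and Accept header below follow the usual anonymous pull flow for public GHCR repositories, an assumption rather than anything taken from the log:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

func main() {
    repo, tag := "flatcar/calico/goldmane", "v3.30.4"

    // 1. Anonymous pull token for the repository (assumed GHCR token endpoint).
    resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    var tok struct {
        Token string `json:"token"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
        panic(err)
    }

    // 2. HEAD the manifest for the tag; a 404 matches the pull errors in the log.
    req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Bearer "+tok.Token)
    req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    res.Body.Close()
    fmt.Println(repo+":"+tag, "->", res.Status)
}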
Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.379 [INFO][4672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.398 [INFO][4672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.405 [INFO][4672] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.411 [INFO][4672] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.414 [INFO][4672] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.418 [INFO][4672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.418 [INFO][4672] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.420 [INFO][4672] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316 Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.427 [INFO][4672] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.439 [INFO][4672] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.71/26] block=192.168.67.64/26 handle="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.440 [INFO][4672] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.71/26] handle="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.440 [INFO][4672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:15:42.492869 containerd[1591]: 2025-12-16 03:15:42.440 [INFO][4672] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.71/26] IPv6=[] ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" HandleID="k8s-pod-network.650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Workload="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.496685 containerd[1591]: 2025-12-16 03:15:42.446 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0", GenerateName:"calico-apiserver-b66776c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"3fd5d06e-1026-4a73-b6dc-d83cdf5f2977", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66776c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"calico-apiserver-b66776c65-kfd2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali384eae8a747", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:42.496685 containerd[1591]: 2025-12-16 03:15:42.446 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.71/32] ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.496685 containerd[1591]: 2025-12-16 03:15:42.446 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali384eae8a747 ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.496685 containerd[1591]: 2025-12-16 03:15:42.472 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.496685 containerd[1591]: 2025-12-16 03:15:42.472 [INFO][4635] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0", GenerateName:"calico-apiserver-b66776c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"3fd5d06e-1026-4a73-b6dc-d83cdf5f2977", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b66776c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316", Pod:"calico-apiserver-b66776c65-kfd2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali384eae8a747", MAC:"fa:47:e4:c0:8c:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:42.496685 containerd[1591]: 2025-12-16 03:15:42.486 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" Namespace="calico-apiserver" Pod="calico-apiserver-b66776c65-kfd2d" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-calico--apiserver--b66776c65--kfd2d-eth0" Dec 16 03:15:42.543633 containerd[1591]: time="2025-12-16T03:15:42.542969219Z" level=info msg="connecting to shim 650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316" address="unix:///run/containerd/s/8f57ebb07f30fb3a0d663621666cd120f8348fc29efdf90161ffe0c1d3b4f88d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:42.580022 kubelet[2815]: E1216 03:15:42.579878 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:42.583058 kubelet[2815]: E1216 03:15:42.582953 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:15:42.584114 kubelet[2815]: E1216 03:15:42.584021 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341" Dec 16 03:15:42.602197 systemd-networkd[1498]: cali2c591085a2e: Link UP Dec 16 03:15:42.605568 systemd-networkd[1498]: cali2c591085a2e: Gained carrier Dec 16 03:15:42.647100 systemd[1]: Started cri-containerd-650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316.scope - libcontainer container 650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316. Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.246 [INFO][4643] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.271 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0 csi-node-driver- calico-system 2f0d2d2c-6593-4c7a-9cdf-35214b834c16 773 0 2025-12-16 03:15:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-7-1189c174c4 csi-node-driver-jz64m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2c591085a2e [] [] }} ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.271 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.394 [INFO][4665] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" HandleID="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Workload="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.395 [INFO][4665] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" HandleID="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Workload="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001237a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-7-1189c174c4", "pod":"csi-node-driver-jz64m", "timestamp":"2025-12-16 03:15:42.394728864 +0000 UTC"}, Hostname:"ci-4547.0.0-7-1189c174c4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.395 [INFO][4665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
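The recurring dns.go:153 warnings (the applied nameserver line is 67.207.67.2 67.207.67.3 67.207.67.2) mean the node's resolv.conf lists more nameservers than the limit of three that kubelet enforces (the same MAXNS limit glibc uses), so only the first three are applied when pod resolv.conf is built. A minimal sketch of that truncation; the four-entry input is hypothetical, since the log only shows the applied line:

package main

import "fmt"

// maxNameservers mirrors the three-nameserver resolver limit kubelet applies.
const maxNameservers = 3

func trim(ns []string) (applied, omitted []string) {
    if len(ns) <= maxNameservers {
        return ns, nil
    }
    return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
    // Hypothetical resolv.conf contents; only the applied line appears in the log.
    ns := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1"}
    applied, omitted := trim(ns)
    fmt.Println("applied:", applied, "omitted:", omitted)
}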
Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.440 [INFO][4665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.440 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-7-1189c174c4' Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.498 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.511 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.527 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.534 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.539 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.539 [INFO][4665] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.543 [INFO][4665] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571 Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.554 [INFO][4665] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.576 [INFO][4665] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.72/26] block=192.168.67.64/26 handle="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.578 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.72/26] handle="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" host="ci-4547.0.0-7-1189c174c4" Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.581 [INFO][4665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:15:42.656371 containerd[1591]: 2025-12-16 03:15:42.581 [INFO][4665] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.72/26] IPv6=[] ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" HandleID="k8s-pod-network.8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Workload="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.655000 audit[4734]: NETFILTER_CFG table=filter:129 family=2 entries=15 op=nft_register_rule pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:42.655000 audit[4734]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffef83246e0 a2=0 a3=7ffef83246cc items=0 ppid=2938 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:42.659604 containerd[1591]: 2025-12-16 03:15:42.588 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f0d2d2c-6593-4c7a-9cdf-35214b834c16", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"", Pod:"csi-node-driver-jz64m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c591085a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:42.659604 containerd[1591]: 2025-12-16 03:15:42.588 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.72/32] ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.659604 containerd[1591]: 2025-12-16 03:15:42.591 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c591085a2e ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" 
Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.659604 containerd[1591]: 2025-12-16 03:15:42.614 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.659604 containerd[1591]: 2025-12-16 03:15:42.615 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2f0d2d2c-6593-4c7a-9cdf-35214b834c16", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-7-1189c174c4", ContainerID:"8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571", Pod:"csi-node-driver-jz64m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c591085a2e", MAC:"9e:dd:74:a8:8f:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:42.659604 containerd[1591]: 2025-12-16 03:15:42.645 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" Namespace="calico-system" Pod="csi-node-driver-jz64m" WorkloadEndpoint="ci--4547.0.0--7--1189c174c4-k8s-csi--node--driver--jz64m-eth0" Dec 16 03:15:42.660000 audit[4734]: NETFILTER_CFG table=nat:130 family=2 entries=25 op=nft_register_chain pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:42.660000 audit[4734]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffef83246e0 a2=0 a3=7ffef83246cc items=0 ppid=2938 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:42.703000 audit: BPF prog-id=213 
op=LOAD Dec 16 03:15:42.705000 audit: BPF prog-id=214 op=LOAD Dec 16 03:15:42.705000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.706000 audit: BPF prog-id=214 op=UNLOAD Dec 16 03:15:42.706000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.706000 audit: BPF prog-id=215 op=LOAD Dec 16 03:15:42.706000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.706000 audit: BPF prog-id=216 op=LOAD Dec 16 03:15:42.706000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.706000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:15:42.706000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.706000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:15:42.706000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.706000 audit: BPF prog-id=217 op=LOAD Dec 16 03:15:42.706000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4705 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306266343031393366663063653863623735643961303235633964 Dec 16 03:15:42.721926 containerd[1591]: time="2025-12-16T03:15:42.720995546Z" level=info msg="connecting to shim 8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571" address="unix:///run/containerd/s/16666b14cf4d739d9ef12ff5bf186f708d6a4f2827dd334d9cc267712a68d68f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:42.778213 systemd[1]: Started cri-containerd-8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571.scope - libcontainer container 8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571. Dec 16 03:15:42.812000 audit: BPF prog-id=218 op=LOAD Dec 16 03:15:42.812000 audit: BPF prog-id=219 op=LOAD Dec 16 03:15:42.812000 audit[4771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.813000 audit: BPF prog-id=219 op=UNLOAD Dec 16 03:15:42.813000 audit[4771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.814000 audit: BPF prog-id=220 op=LOAD Dec 16 03:15:42.814000 audit[4771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:15:42.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.814000 audit: BPF prog-id=221 op=LOAD Dec 16 03:15:42.814000 audit[4771]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.814000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:15:42.814000 audit[4771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.814000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:15:42.814000 audit[4771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.814000 audit: BPF prog-id=222 op=LOAD Dec 16 03:15:42.814000 audit[4771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4761 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:42.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861623262343637306165313764323363653535313838376161346365 Dec 16 03:15:42.825891 containerd[1591]: time="2025-12-16T03:15:42.825847922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b66776c65-kfd2d,Uid:3fd5d06e-1026-4a73-b6dc-d83cdf5f2977,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"650bf40193ff0ce8cb75d9a025c9d313175a9b7e162dd8dc02a80a280fa00316\"" Dec 16 03:15:42.829423 containerd[1591]: time="2025-12-16T03:15:42.829381011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:42.853315 containerd[1591]: 
time="2025-12-16T03:15:42.853270418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jz64m,Uid:2f0d2d2c-6593-4c7a-9cdf-35214b834c16,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ab2b4670ae17d23ce551887aa4ce6c438232a2cfd58062f3501758afe651571\"" Dec 16 03:15:43.157292 containerd[1591]: time="2025-12-16T03:15:43.157230159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:43.159835 containerd[1591]: time="2025-12-16T03:15:43.159760654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:43.160013 containerd[1591]: time="2025-12-16T03:15:43.159885156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:43.160671 kubelet[2815]: E1216 03:15:43.160561 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:43.160671 kubelet[2815]: E1216 03:15:43.160631 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:43.161837 kubelet[2815]: E1216 03:15:43.160952 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw7kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b66776c65-kfd2d_calico-apiserver(3fd5d06e-1026-4a73-b6dc-d83cdf5f2977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:43.162114 containerd[1591]: time="2025-12-16T03:15:43.161358107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:15:43.162683 kubelet[2815]: E1216 03:15:43.162552 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:15:43.199000 audit: BPF prog-id=223 op=LOAD Dec 16 03:15:43.199000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff94b72b90 a2=98 a3=1fffffffffffffff items=0 ppid=4817 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:43.199000 audit: BPF prog-id=223 op=UNLOAD Dec 16 03:15:43.199000 audit[4836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff94b72b60 a3=0 items=0 ppid=4817 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:43.199000 audit: BPF prog-id=224 op=LOAD Dec 16 03:15:43.199000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff94b72a70 a2=94 a3=3 items=0 ppid=4817 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:43.199000 audit: BPF prog-id=224 op=UNLOAD Dec 16 03:15:43.199000 audit[4836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff94b72a70 a2=94 a3=3 items=0 ppid=4817 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:43.199000 audit: BPF prog-id=225 op=LOAD Dec 16 03:15:43.199000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff94b72ab0 a2=94 a3=7fff94b72c90 items=0 ppid=4817 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:43.199000 audit: BPF prog-id=225 op=UNLOAD Dec 16 03:15:43.199000 audit[4836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff94b72ab0 a2=94 a3=7fff94b72c90 items=0 ppid=4817 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:43.205000 audit: BPF prog-id=226 op=LOAD Dec 16 03:15:43.205000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6dec53b0 a2=98 a3=3 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.205000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.205000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:15:43.205000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6dec5380 a3=0 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.205000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.206000 audit: BPF prog-id=227 op=LOAD Dec 16 03:15:43.206000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7fff6dec51a0 a2=94 a3=54428f items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.206000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.206000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:15:43.206000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6dec51a0 a2=94 a3=54428f items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.206000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.206000 audit: BPF prog-id=228 op=LOAD Dec 16 03:15:43.206000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6dec51d0 a2=94 a3=2 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.206000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.206000 audit: BPF prog-id=228 op=UNLOAD Dec 16 03:15:43.206000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6dec51d0 a2=0 a3=2 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.206000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.350025 systemd-networkd[1498]: cali3c90b728eda: Gained IPv6LL Dec 16 03:15:43.463000 audit: BPF prog-id=229 op=LOAD Dec 16 03:15:43.463000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6dec5090 a2=94 a3=1 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.463000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.464000 audit: BPF prog-id=229 op=UNLOAD Dec 16 03:15:43.464000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6dec5090 a2=94 a3=1 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.464000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.476083 containerd[1591]: time="2025-12-16T03:15:43.475845151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:43.477572 containerd[1591]: time="2025-12-16T03:15:43.477097218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:15:43.477572 containerd[1591]: time="2025-12-16T03:15:43.477290787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:43.477747 kubelet[2815]: E1216 
03:15:43.477480 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:43.477747 kubelet[2815]: E1216 03:15:43.477532 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:43.479809 kubelet[2815]: E1216 03:15:43.479002 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:43.478000 audit: BPF prog-id=230 op=LOAD Dec 16 03:15:43.478000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6dec5080 a2=94 a3=4 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.478000 audit: BPF prog-id=230 op=UNLOAD Dec 16 03:15:43.478000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6dec5080 a2=0 a3=4 items=0 
ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.478000 audit: BPF prog-id=231 op=LOAD Dec 16 03:15:43.478000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6dec4ee0 a2=94 a3=5 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.478000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.479000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:15:43.479000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6dec4ee0 a2=0 a3=5 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.479000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.479000 audit: BPF prog-id=232 op=LOAD Dec 16 03:15:43.479000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6dec5100 a2=94 a3=6 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.479000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.479000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:15:43.479000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6dec5100 a2=0 a3=6 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.479000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.479000 audit: BPF prog-id=233 op=LOAD Dec 16 03:15:43.479000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6dec48b0 a2=94 a3=88 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.479000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.479000 audit: BPF prog-id=234 op=LOAD Dec 16 03:15:43.479000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff6dec4730 a2=94 a3=2 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.479000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.479000 audit: BPF prog-id=234 op=UNLOAD Dec 16 03:15:43.479000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff6dec4760 a2=0 a3=7fff6dec4860 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.479000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.480000 audit: BPF prog-id=233 op=UNLOAD Dec 16 03:15:43.480000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3ef78d10 a2=0 a3=2efab4af33ec7207 items=0 ppid=4817 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.480000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:43.485378 containerd[1591]: time="2025-12-16T03:15:43.484812200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:15:43.516000 audit: BPF prog-id=235 op=LOAD Dec 16 03:15:43.516000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4fed6b00 a2=98 a3=1999999999999999 items=0 ppid=4817 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:43.516000 audit: BPF prog-id=235 op=UNLOAD Dec 16 03:15:43.516000 audit[4840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff4fed6ad0 a3=0 items=0 ppid=4817 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:43.516000 audit: BPF prog-id=236 op=LOAD Dec 16 03:15:43.516000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4fed69e0 a2=94 a3=ffff items=0 ppid=4817 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:43.516000 audit: BPF prog-id=236 op=UNLOAD Dec 16 03:15:43.516000 audit[4840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff4fed69e0 a2=94 a3=ffff items=0 ppid=4817 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.516000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:43.516000 audit: BPF prog-id=237 op=LOAD Dec 16 03:15:43.516000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4fed6a20 a2=94 a3=7fff4fed6c00 items=0 ppid=4817 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:43.516000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:15:43.516000 audit[4840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff4fed6a20 a2=94 a3=7fff4fed6c00 items=0 ppid=4817 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.516000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:43.600454 kubelet[2815]: E1216 03:15:43.600080 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:15:43.673000 audit[4854]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:43.673000 audit[4854]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd902f1180 a2=0 a3=7ffd902f116c items=0 ppid=2938 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:43.686641 systemd-networkd[1498]: vxlan.calico: Link UP Dec 16 03:15:43.686653 systemd-networkd[1498]: vxlan.calico: Gained carrier Dec 16 03:15:43.694000 audit[4854]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=4854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:43.694000 audit[4854]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd902f1180 a2=0 a3=7ffd902f116c items=0 ppid=2938 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:43.748000 audit: BPF prog-id=238 op=LOAD Dec 16 03:15:43.748000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb1004e90 a2=98 a3=0 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.748000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.748000 audit: BPF prog-id=238 op=UNLOAD Dec 16 03:15:43.748000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb1004e60 a3=0 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.748000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.751000 audit: BPF prog-id=239 op=LOAD Dec 16 03:15:43.751000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb1004ca0 a2=94 a3=54428f items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.751000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=239 op=UNLOAD Dec 16 03:15:43.754000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb1004ca0 a2=94 a3=54428f items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=240 op=LOAD Dec 16 03:15:43.754000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb1004cd0 a2=94 a3=2 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=240 op=UNLOAD Dec 16 03:15:43.754000 audit[4867]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb1004cd0 a2=0 a3=2 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=241 op=LOAD Dec 16 03:15:43.754000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb1004a80 a2=94 a3=4 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=241 op=UNLOAD Dec 16 03:15:43.754000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb1004a80 a2=94 a3=4 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=242 op=LOAD Dec 16 03:15:43.754000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb1004b80 a2=94 a3=7ffeb1004d00 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.754000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:15:43.754000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb1004b80 a2=0 a3=7ffeb1004d00 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.754000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.760000 audit: BPF prog-id=243 op=LOAD Dec 16 03:15:43.760000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb10042b0 a2=94 a3=2 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.760000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.761000 audit: BPF prog-id=243 op=UNLOAD Dec 16 03:15:43.761000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb10042b0 a2=0 a3=2 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.761000 audit: BPF prog-id=244 op=LOAD Dec 16 03:15:43.761000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb10043b0 a2=94 a3=30 items=0 ppid=4817 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:43.783000 audit: BPF prog-id=245 op=LOAD Dec 16 03:15:43.783000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd33e7cb20 a2=98 a3=0 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:43.784000 audit: BPF prog-id=245 op=UNLOAD Dec 16 03:15:43.784000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd33e7caf0 a3=0 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.784000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:43.784000 audit: BPF prog-id=246 op=LOAD Dec 16 03:15:43.784000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd33e7c910 a2=94 a3=54428f items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.784000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:43.784000 audit: BPF prog-id=246 op=UNLOAD Dec 16 03:15:43.784000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd33e7c910 a2=94 a3=54428f items=0 
ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.784000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:43.784000 audit: BPF prog-id=247 op=LOAD Dec 16 03:15:43.784000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd33e7c940 a2=94 a3=2 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.784000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:43.784000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:15:43.784000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd33e7c940 a2=0 a3=2 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:43.784000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:43.864823 containerd[1591]: time="2025-12-16T03:15:43.864759193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:43.865633 containerd[1591]: time="2025-12-16T03:15:43.865584087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:15:43.865771 containerd[1591]: time="2025-12-16T03:15:43.865681457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:43.866168 kubelet[2815]: E1216 03:15:43.866046 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:43.866168 kubelet[2815]: E1216 03:15:43.866130 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:43.866528 kubelet[2815]: E1216 03:15:43.866487 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:43.868817 kubelet[2815]: E1216 03:15:43.868740 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:44.076000 audit: BPF prog-id=248 op=LOAD Dec 16 03:15:44.076000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd33e7c800 a2=94 a3=1 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.076000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.076000 audit: BPF prog-id=248 op=UNLOAD Dec 16 03:15:44.076000 audit[4870]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=4 a1=7ffd33e7c800 a2=94 a3=1 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.076000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.093000 audit: BPF prog-id=249 op=LOAD Dec 16 03:15:44.093000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd33e7c7f0 a2=94 a3=4 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.093000 audit: BPF prog-id=249 op=UNLOAD Dec 16 03:15:44.093000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd33e7c7f0 a2=0 a3=4 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.094000 audit: BPF prog-id=250 op=LOAD Dec 16 03:15:44.094000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd33e7c650 a2=94 a3=5 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.094000 audit: BPF prog-id=250 op=UNLOAD Dec 16 03:15:44.094000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd33e7c650 a2=0 a3=5 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.094000 audit: BPF prog-id=251 op=LOAD Dec 16 03:15:44.094000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd33e7c870 a2=94 a3=6 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 
Dec 16 03:15:44.094000 audit: BPF prog-id=251 op=UNLOAD Dec 16 03:15:44.094000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd33e7c870 a2=0 a3=6 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.094000 audit: BPF prog-id=252 op=LOAD Dec 16 03:15:44.094000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd33e7c020 a2=94 a3=88 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.094000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.095000 audit: BPF prog-id=253 op=LOAD Dec 16 03:15:44.095000 audit[4870]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd33e7bea0 a2=94 a3=2 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.095000 audit: BPF prog-id=253 op=UNLOAD Dec 16 03:15:44.095000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd33e7bed0 a2=0 a3=7ffd33e7bfd0 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.095000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:15:44.095000 audit[4870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=30706d10 a2=0 a3=b99d525a1910a616 items=0 ppid=4817 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:44.107000 audit: BPF prog-id=244 op=UNLOAD Dec 16 03:15:44.107000 audit[4817]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00124a100 a2=0 a3=0 items=0 ppid=3962 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.107000 audit: PROCTITLE 
proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:15:44.250000 audit[4900]: NETFILTER_CFG table=nat:133 family=2 entries=15 op=nft_register_chain pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:44.250000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd130b17c0 a2=0 a3=7ffd130b17ac items=0 ppid=4817 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.250000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:44.257000 audit[4906]: NETFILTER_CFG table=mangle:134 family=2 entries=16 op=nft_register_chain pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:44.257000 audit[4906]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdc2c11810 a2=0 a3=7ffdc2c117fc items=0 ppid=4817 pid=4906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.257000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:44.260000 audit[4902]: NETFILTER_CFG table=raw:135 family=2 entries=21 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:44.260000 audit[4902]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff562c0a60 a2=0 a3=7fff562c0a4c items=0 ppid=4817 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.260000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:44.273000 audit[4904]: NETFILTER_CFG table=filter:136 family=2 entries=321 op=nft_register_chain pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:44.273000 audit[4904]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7ffe8cad9220 a2=0 a3=7ffe8cad920c items=0 ppid=4817 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:44.310471 systemd-networkd[1498]: cali2c591085a2e: Gained IPv6LL Dec 16 03:15:44.439398 systemd-networkd[1498]: cali384eae8a747: Gained IPv6LL Dec 16 03:15:44.602454 kubelet[2815]: E1216 03:15:44.602390 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:15:44.603696 kubelet[2815]: E1216 03:15:44.603656 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:45.208062 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Dec 16 03:15:48.169294 systemd[1]: Started sshd@9-146.190.151.166:22-147.75.109.163:54158.service - OpenSSH per-connection server daemon (147.75.109.163:54158). Dec 16 03:15:48.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-146.190.151.166:22-147.75.109.163:54158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:48.170279 kernel: kauditd_printk_skb: 300 callbacks suppressed Dec 16 03:15:48.170324 kernel: audit: type=1130 audit(1765854948.168:736): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-146.190.151.166:22-147.75.109.163:54158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:48.322000 audit[4918]: USER_ACCT pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.324478 sshd[4918]: Accepted publickey for core from 147.75.109.163 port 54158 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:15:48.326455 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:48.323000 audit[4918]: CRED_ACQ pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.327821 kernel: audit: type=1101 audit(1765854948.322:737): pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.327887 kernel: audit: type=1103 audit(1765854948.323:738): pid=4918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.332067 kernel: audit: type=1006 audit(1765854948.323:739): pid=4918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:15:48.333901 kernel: audit: type=1300 audit(1765854948.323:739): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2d8a5fa0 a2=3 a3=0 items=0 ppid=1 pid=4918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:48.323000 audit[4918]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2d8a5fa0 a2=3 a3=0 items=0 ppid=1 pid=4918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:48.323000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:48.339969 kernel: audit: type=1327 audit(1765854948.323:739): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:48.345020 systemd-logind[1566]: New session 11 of user core. Dec 16 03:15:48.348161 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 03:15:48.353000 audit[4918]: USER_START pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.358837 kernel: audit: type=1105 audit(1765854948.353:740): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.359227 kernel: audit: type=1103 audit(1765854948.356:741): pid=4922 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.356000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.904913 sshd[4922]: Connection closed by 147.75.109.163 port 54158 Dec 16 03:15:48.905953 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:48.909000 audit[4918]: USER_END pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.916875 kernel: audit: type=1106 audit(1765854948.909:742): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.916976 systemd[1]: sshd@9-146.190.151.166:22-147.75.109.163:54158.service: Deactivated successfully. Dec 16 03:15:48.910000 audit[4918]: CRED_DISP pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.920009 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:15:48.922687 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:15:48.922895 kernel: audit: type=1104 audit(1765854948.910:743): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:48.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-146.190.151.166:22-147.75.109.163:54158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:48.926490 systemd-logind[1566]: Removed session 11. Dec 16 03:15:52.167408 containerd[1591]: time="2025-12-16T03:15:52.167365439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:52.468503 containerd[1591]: time="2025-12-16T03:15:52.468274426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:52.469335 containerd[1591]: time="2025-12-16T03:15:52.469271571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:52.469456 containerd[1591]: time="2025-12-16T03:15:52.469411984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:52.469800 kubelet[2815]: E1216 03:15:52.469735 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:52.470340 kubelet[2815]: E1216 03:15:52.469838 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:52.470340 kubelet[2815]: E1216 03:15:52.470028 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4sbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b66776c65-79frt_calico-apiserver(a48216b5-38ee-49b5-8de6-29119905fab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:52.476619 kubelet[2815]: E1216 03:15:52.471814 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:15:53.170991 containerd[1591]: time="2025-12-16T03:15:53.170845676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:15:53.512566 containerd[1591]: time="2025-12-16T03:15:53.512228318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:53.513139 containerd[1591]: time="2025-12-16T03:15:53.513090306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:15:53.513232 containerd[1591]: time="2025-12-16T03:15:53.513203615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:53.513404 kubelet[2815]: E1216 03:15:53.513366 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:53.513898 kubelet[2815]: E1216 03:15:53.513421 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:53.513898 kubelet[2815]: E1216 03:15:53.513573 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8h2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7944db5ff7-jqshs_calico-system(e282fbeb-8283-4c6d-8776-2743ee6a3341): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:53.515093 kubelet[2815]: E1216 03:15:53.514998 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341" Dec 16 03:15:53.932342 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:15:53.932496 kernel: audit: type=1130 audit(1765854953.929:745): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-146.190.151.166:22-147.75.109.163:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:53.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-146.190.151.166:22-147.75.109.163:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:53.930612 systemd[1]: Started sshd@10-146.190.151.166:22-147.75.109.163:55390.service - OpenSSH per-connection server daemon (147.75.109.163:55390). Dec 16 03:15:54.016000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.018295 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 55390 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:15:54.020000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.022754 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:54.023528 kernel: audit: type=1101 audit(1765854954.016:746): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.023590 kernel: audit: type=1103 audit(1765854954.020:747): pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.020000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd96f1e1b0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:54.030852 kernel: audit: type=1006 audit(1765854954.020:748): pid=4942 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 03:15:54.030961 kernel: audit: type=1300 audit(1765854954.020:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd96f1e1b0 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:54.020000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:54.034367 kernel: audit: type=1327 audit(1765854954.020:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:54.038505 systemd-logind[1566]: New session 12 of user core. Dec 16 03:15:54.044167 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 03:15:54.047000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.052000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.058538 kernel: audit: type=1105 audit(1765854954.047:749): pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.058664 kernel: audit: type=1103 audit(1765854954.052:750): pid=4946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.184909 containerd[1591]: time="2025-12-16T03:15:54.183541149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:15:54.227403 sshd[4946]: Connection closed by 147.75.109.163 port 55390 Dec 16 03:15:54.228282 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:54.238824 kernel: audit: type=1106 audit(1765854954.230:751): pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.230000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.236385 systemd[1]: sshd@10-146.190.151.166:22-147.75.109.163:55390.service: Deactivated successfully. Dec 16 03:15:54.230000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.243379 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:15:54.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-146.190.151.166:22-147.75.109.163:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:54.245839 kernel: audit: type=1104 audit(1765854954.230:752): pid=4942 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:54.248106 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:15:54.251192 systemd-logind[1566]: Removed session 12. Dec 16 03:15:54.530238 containerd[1591]: time="2025-12-16T03:15:54.529992250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:54.532427 containerd[1591]: time="2025-12-16T03:15:54.532301278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:54.532890 containerd[1591]: time="2025-12-16T03:15:54.532745944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:15:54.533623 kubelet[2815]: E1216 03:15:54.533402 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:54.533623 kubelet[2815]: E1216 03:15:54.533580 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:54.535150 kubelet[2815]: E1216 03:15:54.534807 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4b735df0f3c44edda470dc6f58182ab2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qz29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c76c6f97-2ccvf_calico-system(4ef57fb4-ee02-432e-a890-17cb5351bf0b): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:54.538087 containerd[1591]: time="2025-12-16T03:15:54.537989672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:15:54.855505 containerd[1591]: time="2025-12-16T03:15:54.855429040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:54.856394 containerd[1591]: time="2025-12-16T03:15:54.856280449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:15:54.856394 containerd[1591]: time="2025-12-16T03:15:54.856340777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:54.856748 kubelet[2815]: E1216 03:15:54.856702 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:54.857483 kubelet[2815]: E1216 03:15:54.856764 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:54.857483 kubelet[2815]: E1216 03:15:54.856965 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c76c6f97-2ccvf_calico-system(4ef57fb4-ee02-432e-a890-17cb5351bf0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:54.858979 kubelet[2815]: E1216 03:15:54.858927 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c76c6f97-2ccvf" podUID="4ef57fb4-ee02-432e-a890-17cb5351bf0b" Dec 16 03:15:55.167490 containerd[1591]: time="2025-12-16T03:15:55.167260717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:15:55.465452 containerd[1591]: time="2025-12-16T03:15:55.465151851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:55.466505 containerd[1591]: time="2025-12-16T03:15:55.466047513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 
03:15:55.466505 containerd[1591]: time="2025-12-16T03:15:55.466115392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:55.466964 kubelet[2815]: E1216 03:15:55.466871 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:55.466964 kubelet[2815]: E1216 03:15:55.466942 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:55.468545 kubelet[2815]: E1216 03:15:55.467733 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9g7b7_calico-system(b1b86abd-1663-4902-987a-286c262c84b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:55.470091 kubelet[2815]: E1216 03:15:55.470032 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0" Dec 16 03:15:56.169555 containerd[1591]: time="2025-12-16T03:15:56.169110189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:15:56.480731 containerd[1591]: time="2025-12-16T03:15:56.480590844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:56.481600 containerd[1591]: time="2025-12-16T03:15:56.481557728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:15:56.481711 containerd[1591]: time="2025-12-16T03:15:56.481655603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:56.481963 kubelet[2815]: E1216 03:15:56.481924 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:56.482279 kubelet[2815]: E1216 03:15:56.481981 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:56.482279 kubelet[2815]: E1216 03:15:56.482115 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:56.485016 containerd[1591]: time="2025-12-16T03:15:56.484959037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:15:56.786276 containerd[1591]: time="2025-12-16T03:15:56.786092692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:56.787410 containerd[1591]: time="2025-12-16T03:15:56.787264667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:15:56.787410 containerd[1591]: time="2025-12-16T03:15:56.787376456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:56.788011 kubelet[2815]: E1216 03:15:56.787892 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:56.788011 kubelet[2815]: E1216 03:15:56.787979 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:56.788568 kubelet[2815]: E1216 03:15:56.788489 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:56.790121 kubelet[2815]: E1216 03:15:56.790051 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:15:58.167576 containerd[1591]: time="2025-12-16T03:15:58.167279720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:58.513071 containerd[1591]: time="2025-12-16T03:15:58.512880026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:15:58.514201 containerd[1591]: time="2025-12-16T03:15:58.514062207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:58.514201 containerd[1591]: time="2025-12-16T03:15:58.514164334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:58.514610 kubelet[2815]: E1216 03:15:58.514494 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:58.514610 kubelet[2815]: E1216 03:15:58.514549 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:58.515843 kubelet[2815]: E1216 03:15:58.515469 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw7kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b66776c65-kfd2d_calico-apiserver(3fd5d06e-1026-4a73-b6dc-d83cdf5f2977): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:58.516728 kubelet[2815]: E1216 03:15:58.516666 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:15:59.246441 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:15:59.246578 kernel: audit: type=1130 audit(1765854959.243:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-146.190.151.166:22-147.75.109.163:55400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:59.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-146.190.151.166:22-147.75.109.163:55400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:59.244973 systemd[1]: Started sshd@11-146.190.151.166:22-147.75.109.163:55400.service - OpenSSH per-connection server daemon (147.75.109.163:55400). Dec 16 03:15:59.326000 audit[4970]: USER_ACCT pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.329016 sshd[4970]: Accepted publickey for core from 147.75.109.163 port 55400 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:15:59.330000 audit[4970]: CRED_ACQ pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.334126 kernel: audit: type=1101 audit(1765854959.326:755): pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.334222 kernel: audit: type=1103 audit(1765854959.330:756): pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.334302 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:59.337436 kernel: audit: type=1006 audit(1765854959.330:757): pid=4970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 03:15:59.330000 audit[4970]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc862b4300 a2=3 a3=0 items=0 ppid=1 pid=4970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:15:59.342265 systemd-logind[1566]: New session 13 of user core. Dec 16 03:15:59.330000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:59.345688 kernel: audit: type=1300 audit(1765854959.330:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc862b4300 a2=3 a3=0 items=0 ppid=1 pid=4970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:59.345734 kernel: audit: type=1327 audit(1765854959.330:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:59.350133 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 03:15:59.353000 audit[4970]: USER_START pid=4970 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.360892 kernel: audit: type=1105 audit(1765854959.353:758): pid=4970 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.359000 audit[4975]: CRED_ACQ pid=4975 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.365869 kernel: audit: type=1103 audit(1765854959.359:759): pid=4975 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.469487 sshd[4975]: Connection closed by 147.75.109.163 port 55400 Dec 16 03:15:59.470058 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:59.473000 audit[4970]: USER_END pid=4970 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.480828 kernel: audit: type=1106 audit(1765854959.473:760): pid=4970 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.473000 audit[4970]: CRED_DISP pid=4970 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.486715 systemd[1]: sshd@11-146.190.151.166:22-147.75.109.163:55400.service: Deactivated successfully. 
Dec 16 03:15:59.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-146.190.151.166:22-147.75.109.163:55400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:59.487887 kernel: audit: type=1104 audit(1765854959.473:761): pid=4970 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.490202 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:15:59.493580 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:15:59.500384 systemd[1]: Started sshd@12-146.190.151.166:22-147.75.109.163:55412.service - OpenSSH per-connection server daemon (147.75.109.163:55412). Dec 16 03:15:59.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-146.190.151.166:22-147.75.109.163:55412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:59.502753 systemd-logind[1566]: Removed session 13. Dec 16 03:15:59.592000 audit[4989]: USER_ACCT pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.594139 sshd[4989]: Accepted publickey for core from 147.75.109.163 port 55412 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:15:59.593000 audit[4989]: CRED_ACQ pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.593000 audit[4989]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff11939e70 a2=3 a3=0 items=0 ppid=1 pid=4989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:59.593000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:59.596361 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:59.603512 systemd-logind[1566]: New session 14 of user core. Dec 16 03:15:59.611157 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 03:15:59.614000 audit[4989]: USER_START pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.616000 audit[4993]: CRED_ACQ pid=4993 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.790936 sshd[4993]: Connection closed by 147.75.109.163 port 55412 Dec 16 03:15:59.792188 sshd-session[4989]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:59.792000 audit[4989]: USER_END pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.794000 audit[4989]: CRED_DISP pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.808323 systemd[1]: sshd@12-146.190.151.166:22-147.75.109.163:55412.service: Deactivated successfully. Dec 16 03:15:59.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-146.190.151.166:22-147.75.109.163:55412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:59.811609 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:15:59.813257 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:15:59.820615 systemd[1]: Started sshd@13-146.190.151.166:22-147.75.109.163:55414.service - OpenSSH per-connection server daemon (147.75.109.163:55414). Dec 16 03:15:59.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-146.190.151.166:22-147.75.109.163:55414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:59.824116 systemd-logind[1566]: Removed session 14. 
Dec 16 03:15:59.923000 audit[5003]: USER_ACCT pid=5003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.925278 sshd[5003]: Accepted publickey for core from 147.75.109.163 port 55414 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:15:59.924000 audit[5003]: CRED_ACQ pid=5003 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.924000 audit[5003]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb8fb6b70 a2=3 a3=0 items=0 ppid=1 pid=5003 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:59.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:59.927346 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:59.934225 systemd-logind[1566]: New session 15 of user core. Dec 16 03:15:59.939156 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 03:15:59.943000 audit[5003]: USER_START pid=5003 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:15:59.948000 audit[5007]: CRED_ACQ pid=5007 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:00.110734 sshd[5007]: Connection closed by 147.75.109.163 port 55414 Dec 16 03:16:00.112060 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:00.113000 audit[5003]: USER_END pid=5003 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:00.113000 audit[5003]: CRED_DISP pid=5003 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:00.121118 systemd[1]: sshd@13-146.190.151.166:22-147.75.109.163:55414.service: Deactivated successfully. Dec 16 03:16:00.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-146.190.151.166:22-147.75.109.163:55414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:00.125723 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:16:00.128811 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 03:16:00.132079 systemd-logind[1566]: Removed session 15. Dec 16 03:16:05.133299 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:16:05.133421 kernel: audit: type=1130 audit(1765854965.127:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-146.190.151.166:22-147.75.109.163:41664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:05.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-146.190.151.166:22-147.75.109.163:41664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:05.128468 systemd[1]: Started sshd@14-146.190.151.166:22-147.75.109.163:41664.service - OpenSSH per-connection server daemon (147.75.109.163:41664). Dec 16 03:16:05.166008 kubelet[2815]: E1216 03:16:05.165628 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:16:05.203000 audit[5026]: USER_ACCT pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.205167 sshd[5026]: Accepted publickey for core from 147.75.109.163 port 41664 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:05.207000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.209468 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:05.210945 kernel: audit: type=1101 audit(1765854965.203:782): pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.211017 kernel: audit: type=1103 audit(1765854965.207:783): pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.214667 kernel: audit: type=1006 audit(1765854965.207:784): pid=5026 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 03:16:05.207000 audit[5026]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbe55eb70 a2=3 a3=0 items=0 ppid=1 pid=5026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:05.218022 kernel: audit: type=1300 audit(1765854965.207:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbe55eb70 a2=3 a3=0 items=0 ppid=1 pid=5026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:05.207000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:05.222036 kernel: audit: type=1327 audit(1765854965.207:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:05.221363 systemd-logind[1566]: New session 16 of user core. Dec 16 03:16:05.231132 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 03:16:05.235000 audit[5026]: USER_START pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.243863 kernel: audit: type=1105 audit(1765854965.235:785): pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.241000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.249823 kernel: audit: type=1103 audit(1765854965.241:786): pid=5030 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.354821 sshd[5030]: Connection closed by 147.75.109.163 port 41664 Dec 16 03:16:05.356163 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:05.358000 audit[5026]: USER_END pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.362711 systemd[1]: sshd@14-146.190.151.166:22-147.75.109.163:41664.service: Deactivated successfully. 
Dec 16 03:16:05.365816 kernel: audit: type=1106 audit(1765854965.358:787): pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.365932 kernel: audit: type=1104 audit(1765854965.358:788): pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.358000 audit[5026]: CRED_DISP pid=5026 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:05.368168 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:16:05.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-146.190.151.166:22-147.75.109.163:41664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:05.372599 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:16:05.375184 systemd-logind[1566]: Removed session 16. Dec 16 03:16:06.167015 kubelet[2815]: E1216 03:16:06.166841 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341" Dec 16 03:16:08.166244 kubelet[2815]: E1216 03:16:08.165021 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:16:09.167922 kubelet[2815]: E1216 03:16:09.166853 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c76c6f97-2ccvf" podUID="4ef57fb4-ee02-432e-a890-17cb5351bf0b" Dec 16 03:16:10.167261 kubelet[2815]: E1216 03:16:10.167200 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0" Dec 16 03:16:10.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-146.190.151.166:22-147.75.109.163:41672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:10.378275 systemd[1]: Started sshd@15-146.190.151.166:22-147.75.109.163:41672.service - OpenSSH per-connection server daemon (147.75.109.163:41672). Dec 16 03:16:10.379410 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:16:10.379462 kernel: audit: type=1130 audit(1765854970.377:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-146.190.151.166:22-147.75.109.163:41672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:10.484000 audit[5072]: USER_ACCT pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.486038 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 41672 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:10.488000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.491446 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:10.492026 kernel: audit: type=1101 audit(1765854970.484:791): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.492096 kernel: audit: type=1103 audit(1765854970.488:792): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.488000 audit[5072]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe54071b70 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:10.501002 kernel: audit: type=1006 audit(1765854970.488:793): pid=5072 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 03:16:10.501145 kernel: audit: type=1300 audit(1765854970.488:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe54071b70 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:10.488000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:10.506238 kernel: audit: type=1327 audit(1765854970.488:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:10.506772 systemd-logind[1566]: New session 17 of user core. Dec 16 03:16:10.513233 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 03:16:10.518000 audit[5072]: USER_START pid=5072 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.524852 kernel: audit: type=1105 audit(1765854970.518:794): pid=5072 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.524985 kernel: audit: type=1103 audit(1765854970.520:795): pid=5077 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.520000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.626636 sshd[5077]: Connection closed by 147.75.109.163 port 41672 Dec 16 03:16:10.627931 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:10.631000 audit[5072]: USER_END pid=5072 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.638813 kernel: audit: type=1106 audit(1765854970.631:796): pid=5072 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.634000 audit[5072]: CRED_DISP pid=5072 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.643846 kernel: audit: type=1104 audit(1765854970.634:797): pid=5072 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:10.644589 systemd[1]: sshd@15-146.190.151.166:22-147.75.109.163:41672.service: Deactivated successfully. 
Dec 16 03:16:10.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-146.190.151.166:22-147.75.109.163:41672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:10.647739 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:16:10.649522 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:16:10.651583 systemd-logind[1566]: Removed session 17. Dec 16 03:16:11.169543 kubelet[2815]: E1216 03:16:11.169480 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:16:13.167023 kubelet[2815]: E1216 03:16:13.166962 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:16:15.648287 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:16:15.648413 kernel: audit: type=1130 audit(1765854975.646:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-146.190.151.166:22-147.75.109.163:53880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:15.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-146.190.151.166:22-147.75.109.163:53880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:15.647331 systemd[1]: Started sshd@16-146.190.151.166:22-147.75.109.163:53880.service - OpenSSH per-connection server daemon (147.75.109.163:53880). 
Dec 16 03:16:15.744000 audit[5092]: USER_ACCT pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.747009 sshd[5092]: Accepted publickey for core from 147.75.109.163 port 53880 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:15.750054 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:15.746000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.752032 kernel: audit: type=1101 audit(1765854975.744:800): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.752124 kernel: audit: type=1103 audit(1765854975.746:801): pid=5092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.755070 kernel: audit: type=1006 audit(1765854975.746:802): pid=5092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 03:16:15.746000 audit[5092]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ba5b7c0 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:15.761138 kernel: audit: type=1300 audit(1765854975.746:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ba5b7c0 a2=3 a3=0 items=0 ppid=1 pid=5092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:15.746000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:15.765071 kernel: audit: type=1327 audit(1765854975.746:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:15.768652 systemd-logind[1566]: New session 18 of user core. Dec 16 03:16:15.774182 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 03:16:15.777000 audit[5092]: USER_START pid=5092 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.783000 audit[5096]: CRED_ACQ pid=5096 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.788623 kernel: audit: type=1105 audit(1765854975.777:803): pid=5092 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.788777 kernel: audit: type=1103 audit(1765854975.783:804): pid=5096 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.916971 sshd[5096]: Connection closed by 147.75.109.163 port 53880 Dec 16 03:16:15.918409 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:15.920000 audit[5092]: USER_END pid=5092 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.928102 systemd[1]: sshd@16-146.190.151.166:22-147.75.109.163:53880.service: Deactivated successfully. Dec 16 03:16:15.928824 kernel: audit: type=1106 audit(1765854975.920:805): pid=5092 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.920000 audit[5092]: CRED_DISP pid=5092 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.933814 kernel: audit: type=1104 audit(1765854975.920:806): pid=5092 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:15.934165 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:16:15.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-146.190.151.166:22-147.75.109.163:53880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:15.937369 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. 
Dec 16 03:16:15.940254 systemd-logind[1566]: Removed session 18. Dec 16 03:16:17.169472 containerd[1591]: time="2025-12-16T03:16:17.169181822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:16:17.519081 containerd[1591]: time="2025-12-16T03:16:17.518911990Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:17.520236 containerd[1591]: time="2025-12-16T03:16:17.520166608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:16:17.520365 containerd[1591]: time="2025-12-16T03:16:17.520194243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:17.522084 kubelet[2815]: E1216 03:16:17.520515 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:16:17.522084 kubelet[2815]: E1216 03:16:17.520588 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:16:17.522977 kubelet[2815]: E1216 03:16:17.521913 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8h2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7944db5ff7-jqshs_calico-system(e282fbeb-8283-4c6d-8776-2743ee6a3341): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:17.523117 containerd[1591]: time="2025-12-16T03:16:17.522466792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:16:17.523568 kubelet[2815]: E1216 03:16:17.523508 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341" Dec 16 03:16:17.862931 containerd[1591]: time="2025-12-16T03:16:17.862024118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:17.881280 containerd[1591]: time="2025-12-16T03:16:17.880386315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:16:17.881280 containerd[1591]: time="2025-12-16T03:16:17.880515135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:17.881935 kubelet[2815]: E1216 03:16:17.881779 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:17.882282 kubelet[2815]: E1216 03:16:17.882115 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:17.883457 kubelet[2815]: E1216 03:16:17.883286 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4sbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b66776c65-79frt_calico-apiserver(a48216b5-38ee-49b5-8de6-29119905fab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:17.884774 kubelet[2815]: E1216 03:16:17.884702 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4" Dec 16 03:16:20.932676 systemd[1]: Started sshd@17-146.190.151.166:22-147.75.109.163:53896.service - OpenSSH per-connection server daemon (147.75.109.163:53896). Dec 16 03:16:20.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-146.190.151.166:22-147.75.109.163:53896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:20.934518 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:16:20.934624 kernel: audit: type=1130 audit(1765854980.932:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-146.190.151.166:22-147.75.109.163:53896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:21.064000 audit[5107]: USER_ACCT pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.068019 sshd[5107]: Accepted publickey for core from 147.75.109.163 port 53896 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:21.069762 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:21.070829 kernel: audit: type=1101 audit(1765854981.064:809): pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.067000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.076880 kernel: audit: type=1103 audit(1765854981.067:810): pid=5107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.076990 kernel: audit: type=1006 audit(1765854981.068:811): pid=5107 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:16:21.068000 audit[5107]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedc631fc0 a2=3 a3=0 items=0 ppid=1 pid=5107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.080366 kernel: audit: type=1300 audit(1765854981.068:811): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedc631fc0 a2=3 a3=0 items=0 ppid=1 pid=5107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.068000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:21.084159 kernel: audit: type=1327 audit(1765854981.068:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:21.085897 systemd-logind[1566]: New session 19 of user core. Dec 16 03:16:21.104070 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 03:16:21.107000 audit[5107]: USER_START pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.114879 kernel: audit: type=1105 audit(1765854981.107:812): pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.115114 kernel: audit: type=1103 audit(1765854981.111:813): pid=5111 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.111000 audit[5111]: CRED_ACQ pid=5111 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.254136 sshd[5111]: Connection closed by 147.75.109.163 port 53896 Dec 16 03:16:21.255753 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:21.258000 audit[5107]: USER_END pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.259000 audit[5107]: CRED_DISP pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.266724 kernel: audit: type=1106 audit(1765854981.258:814): pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.266841 kernel: audit: type=1104 audit(1765854981.259:815): pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.274090 systemd[1]: sshd@17-146.190.151.166:22-147.75.109.163:53896.service: Deactivated successfully. Dec 16 03:16:21.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-146.190.151.166:22-147.75.109.163:53896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:21.278268 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:16:21.279581 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. 
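The numeric field in each audit record above, e.g. `audit(1765854981.064:809)`, is a UTC epoch timestamp plus a per-record serial number, and it lines up with the wall-clock prefix on the same journal entry. A minimal Python sketch (illustrative only, not part of the captured log) that converts such a field back into the journal's timestamp format:

```python
from datetime import datetime, timezone

def audit_stamp_to_wallclock(stamp: str) -> tuple[str, int]:
    """Convert an audit(SECONDS.MILLIS:SERIAL) field to (UTC time string, serial)."""
    inner = stamp[stamp.index("(") + 1 : stamp.rindex(")")]
    epoch, serial = inner.split(":")
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return ts.strftime("%b %d %H:%M:%S.%f"), int(serial)

if __name__ == "__main__":
    # The record logged as "audit(1765854981.064:809)" corresponds to
    # Dec 16 03:16:21.064 UTC, matching the journal prefix on that line.
    print(audit_stamp_to_wallclock("audit(1765854981.064:809)"))
```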
Dec 16 03:16:21.283102 systemd-logind[1566]: Removed session 19. Dec 16 03:16:21.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-146.190.151.166:22-147.75.109.163:53902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:21.285470 systemd[1]: Started sshd@18-146.190.151.166:22-147.75.109.163:53902.service - OpenSSH per-connection server daemon (147.75.109.163:53902). Dec 16 03:16:21.386000 audit[5123]: USER_ACCT pid=5123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.387817 sshd[5123]: Accepted publickey for core from 147.75.109.163 port 53902 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:21.388000 audit[5123]: CRED_ACQ pid=5123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.388000 audit[5123]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8f307230 a2=3 a3=0 items=0 ppid=1 pid=5123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.388000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:21.391180 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:21.399214 systemd-logind[1566]: New session 20 of user core. Dec 16 03:16:21.411159 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 03:16:21.416000 audit[5123]: USER_START pid=5123 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.419000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.694875 sshd[5127]: Connection closed by 147.75.109.163 port 53902 Dec 16 03:16:21.695836 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:21.698000 audit[5123]: USER_END pid=5123 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.698000 audit[5123]: CRED_DISP pid=5123 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-146.190.151.166:22-147.75.109.163:53918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:21.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-146.190.151.166:22-147.75.109.163:53902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:21.715078 systemd[1]: Started sshd@19-146.190.151.166:22-147.75.109.163:53918.service - OpenSSH per-connection server daemon (147.75.109.163:53918). Dec 16 03:16:21.717120 systemd[1]: sshd@18-146.190.151.166:22-147.75.109.163:53902.service: Deactivated successfully. Dec 16 03:16:21.721844 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:16:21.728671 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:16:21.731122 systemd-logind[1566]: Removed session 20. 
Dec 16 03:16:21.830000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.831387 sshd[5134]: Accepted publickey for core from 147.75.109.163 port 53918 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:21.832000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.832000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9463b320 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.832000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:21.834582 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:21.841439 systemd-logind[1566]: New session 21 of user core. Dec 16 03:16:21.851117 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 03:16:21.854000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:21.857000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:22.168351 containerd[1591]: time="2025-12-16T03:16:22.168069433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:16:22.508700 containerd[1591]: time="2025-12-16T03:16:22.508513242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:22.509589 containerd[1591]: time="2025-12-16T03:16:22.509513835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:16:22.509589 containerd[1591]: time="2025-12-16T03:16:22.509552640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:22.509952 kubelet[2815]: E1216 03:16:22.509872 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:16:22.510379 kubelet[2815]: E1216 03:16:22.509950 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:16:22.510379 kubelet[2815]: E1216 03:16:22.510139 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4b735df0f3c44edda470dc6f58182ab2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qz29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c76c6f97-2ccvf_calico-system(4ef57fb4-ee02-432e-a890-17cb5351bf0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:22.513817 containerd[1591]: time="2025-12-16T03:16:22.513591471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:16:22.595000 audit[5152]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:22.595000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdacfc5690 a2=0 a3=7ffdacfc567c items=0 ppid=2938 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:22.600000 audit[5152]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:22.600000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdacfc5690 a2=0 a3=0 items=0 ppid=2938 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
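The audit records above carry the triggering command line as a hex-encoded PROCTITLE field, e.g. `737368642D...` for the sshd sessions and `69707461626C65732D...` for the iptables-restore invocations. A small Python sketch (for illustration, not part of the captured log) that decodes such a field into its NUL-separated argv:

```python
# Decode an audit PROCTITLE hex field into the NUL-separated argv it encodes.
def decode_proctitle(hex_field: str) -> list[str]:
    raw = bytes.fromhex(hex_field)
    # argv elements are separated by NUL bytes in the proctitle buffer
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    samples = {
        "sshd session": "737368642D73657373696F6E3A20636F7265205B707269765D",
        "iptables-restore": (
            "69707461626C65732D726573746F7265002D770035002D5700313030303030"
            "002D2D6E6F666C757368002D2D636F756E74657273"
        ),
    }
    for label, field in samples.items():
        print(label, decode_proctitle(field))
    # Output:
    #   sshd session ['sshd-session: core [priv]']
    #   iptables-restore ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```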
Dec 16 03:16:22.634150 sshd[5141]: Connection closed by 147.75.109.163 port 53918 Dec 16 03:16:22.638220 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:22.643000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:22.646000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:22.655106 systemd[1]: sshd@19-146.190.151.166:22-147.75.109.163:53918.service: Deactivated successfully. Dec 16 03:16:22.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-146.190.151.166:22-147.75.109.163:53918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:22.660762 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:16:22.664117 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:16:22.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-146.190.151.166:22-147.75.109.163:49912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:22.675395 systemd[1]: Started sshd@20-146.190.151.166:22-147.75.109.163:49912.service - OpenSSH per-connection server daemon (147.75.109.163:49912). Dec 16 03:16:22.677960 systemd-logind[1566]: Removed session 21. 
Dec 16 03:16:22.685000 audit[5156]: NETFILTER_CFG table=filter:139 family=2 entries=38 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:22.685000 audit[5156]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdb52cdb30 a2=0 a3=7ffdb52cdb1c items=0 ppid=2938 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:22.688000 audit[5156]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:22.688000 audit[5156]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb52cdb30 a2=0 a3=0 items=0 ppid=2938 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:22.828000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:22.830765 sshd[5159]: Accepted publickey for core from 147.75.109.163 port 49912 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:22.831000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:22.831000 audit[5159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccb5a1170 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.831000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:22.834153 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:22.843447 systemd-logind[1566]: New session 22 of user core. Dec 16 03:16:22.848201 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 03:16:22.855000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:22.857369 containerd[1591]: time="2025-12-16T03:16:22.857169621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:22.859491 containerd[1591]: time="2025-12-16T03:16:22.859451575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:16:22.859592 containerd[1591]: time="2025-12-16T03:16:22.859551159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:22.860470 kubelet[2815]: E1216 03:16:22.859946 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:16:22.860470 kubelet[2815]: E1216 03:16:22.860044 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:16:22.860470 kubelet[2815]: E1216 03:16:22.860341 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz29d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c76c6f97-2ccvf_calico-system(4ef57fb4-ee02-432e-a890-17cb5351bf0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:22.861685 kubelet[2815]: E1216 03:16:22.861631 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c76c6f97-2ccvf" podUID="4ef57fb4-ee02-432e-a890-17cb5351bf0b" Dec 16 03:16:22.860000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.165551 kubelet[2815]: E1216 03:16:23.165105 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:16:23.266833 sshd[5163]: Connection closed by 147.75.109.163 port 49912 Dec 16 03:16:23.266661 sshd-session[5159]: 
pam_unix(sshd:session): session closed for user core Dec 16 03:16:23.271000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.272000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.284308 systemd[1]: sshd@20-146.190.151.166:22-147.75.109.163:49912.service: Deactivated successfully. Dec 16 03:16:23.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-146.190.151.166:22-147.75.109.163:49912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.292265 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:16:23.297295 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:16:23.303676 systemd[1]: Started sshd@21-146.190.151.166:22-147.75.109.163:49926.service - OpenSSH per-connection server daemon (147.75.109.163:49926). Dec 16 03:16:23.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-146.190.151.166:22-147.75.109.163:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.305767 systemd-logind[1566]: Removed session 22. Dec 16 03:16:23.400000 audit[5173]: USER_ACCT pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.401832 sshd[5173]: Accepted publickey for core from 147.75.109.163 port 49926 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:23.403000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.403000 audit[5173]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd1573e30 a2=3 a3=0 items=0 ppid=1 pid=5173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:23.403000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:23.405644 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:23.418918 systemd-logind[1566]: New session 23 of user core. Dec 16 03:16:23.422212 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 03:16:23.427000 audit[5173]: USER_START pid=5173 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.431000 audit[5177]: CRED_ACQ pid=5177 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.581751 sshd[5177]: Connection closed by 147.75.109.163 port 49926 Dec 16 03:16:23.585015 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:23.586000 audit[5173]: USER_END pid=5173 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.586000 audit[5173]: CRED_DISP pid=5173 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:23.591127 systemd[1]: sshd@21-146.190.151.166:22-147.75.109.163:49926.service: Deactivated successfully. Dec 16 03:16:23.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-146.190.151.166:22-147.75.109.163:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.598077 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:16:23.603811 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Dec 16 03:16:23.608469 systemd-logind[1566]: Removed session 23. 
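The recurring kubelet `dns.go:153` warning ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2") refers to the conventional three-entry resolv.conf nameserver limit: entries beyond the first three are dropped when the pod's resolver configuration is assembled. A minimal sketch of that truncation, assuming the three-entry limit; the sample input list is hypothetical, only the applied line appears in the log:

```python
# Illustrative sketch: trim a nameserver list to the conventional
# three-entry resolv.conf limit that the kubelet warning refers to.
MAX_NAMESERVERS = 3

def applied_nameservers(configured: list[str]) -> tuple[list[str], list[str]]:
    """Return (applied, omitted) nameserver lists."""
    return configured[:MAX_NAMESERVERS], configured[MAX_NAMESERVERS:]

if __name__ == "__main__":
    # Hypothetical host configuration that would trigger the warning above.
    configured = ["67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.3"]
    applied, omitted = applied_nameservers(configured)
    print("applied:", " ".join(applied))   # 67.207.67.2 67.207.67.3 67.207.67.2
    print("omitted:", " ".join(omitted))
```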
Dec 16 03:16:25.166484 containerd[1591]: time="2025-12-16T03:16:25.166338643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:16:25.490820 containerd[1591]: time="2025-12-16T03:16:25.490618199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:25.492027 containerd[1591]: time="2025-12-16T03:16:25.491930858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:16:25.492288 containerd[1591]: time="2025-12-16T03:16:25.491952581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:25.492359 kubelet[2815]: E1216 03:16:25.492280 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:16:25.492359 kubelet[2815]: E1216 03:16:25.492347 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:16:25.493905 kubelet[2815]: E1216 03:16:25.493828 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9g7b7_calico-system(b1b86abd-1663-4902-987a-286c262c84b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:25.495294 kubelet[2815]: E1216 03:16:25.495255 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0" Dec 16 03:16:26.167815 kubelet[2815]: E1216 03:16:26.167724 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 16 03:16:26.171278 containerd[1591]: time="2025-12-16T03:16:26.171133546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:16:26.463500 containerd[1591]: time="2025-12-16T03:16:26.463208550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:26.464386 containerd[1591]: time="2025-12-16T03:16:26.464258424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:16:26.464386 containerd[1591]: time="2025-12-16T03:16:26.464324295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:26.464560 kubelet[2815]: E1216 03:16:26.464501 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:16:26.464560 kubelet[2815]: E1216 03:16:26.464551 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:16:26.465206 kubelet[2815]: E1216 03:16:26.464874 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:26.466483 containerd[1591]: time="2025-12-16T03:16:26.466306612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:16:26.775098 containerd[1591]: time="2025-12-16T03:16:26.774908501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:26.775952 containerd[1591]: time="2025-12-16T03:16:26.775853884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:16:26.776060 containerd[1591]: time="2025-12-16T03:16:26.775944780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:26.776208 kubelet[2815]: E1216 03:16:26.776173 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:26.776559 kubelet[2815]: E1216 03:16:26.776225 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:26.776921 kubelet[2815]: E1216 03:16:26.776621 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw7kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b66776c65-kfd2d_calico-apiserver(3fd5d06e-1026-4a73-b6dc-d83cdf5f2977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:26.777627 containerd[1591]: time="2025-12-16T03:16:26.777308184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:16:26.778589 kubelet[2815]: E1216 03:16:26.778098 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977" Dec 16 03:16:27.082822 containerd[1591]: time="2025-12-16T03:16:27.082647316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:27.084270 containerd[1591]: time="2025-12-16T03:16:27.084187073Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:16:27.084527 containerd[1591]: time="2025-12-16T03:16:27.084233261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:27.084954 kubelet[2815]: E1216 03:16:27.084778 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:16:27.084954 kubelet[2815]: E1216 03:16:27.084903 2815 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:16:27.087096 kubelet[2815]: E1216 03:16:27.086979 2815 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jz64m_calico-system(2f0d2d2c-6593-4c7a-9cdf-35214b834c16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:27.088343 kubelet[2815]: E1216 03:16:27.088262 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16" Dec 16 03:16:28.606909 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 03:16:28.607077 kernel: audit: type=1130 audit(1765854988.601:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-146.190.151.166:22-147.75.109.163:49942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:28.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-146.190.151.166:22-147.75.109.163:49942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:28.602690 systemd[1]: Started sshd@22-146.190.151.166:22-147.75.109.163:49942.service - OpenSSH per-connection server daemon (147.75.109.163:49942). Dec 16 03:16:28.716000 audit[5196]: USER_ACCT pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.718945 sshd[5196]: Accepted publickey for core from 147.75.109.163 port 49942 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8 Dec 16 03:16:28.724363 kernel: audit: type=1101 audit(1765854988.716:858): pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.724489 kernel: audit: type=1103 audit(1765854988.719:859): pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.719000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.724159 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:28.719000 audit[5196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd74da550 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:28.735931 kernel: audit: type=1006 audit(1765854988.719:860): pid=5196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 03:16:28.736097 kernel: audit: type=1300 audit(1765854988.719:860): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd74da550 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:28.719000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:28.740867 kernel: audit: type=1327 audit(1765854988.719:860): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:28.746595 systemd-logind[1566]: New session 24 of user core. Dec 16 03:16:28.750080 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 03:16:28.755000 audit[5196]: USER_START pid=5196 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.762000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.766975 kernel: audit: type=1105 audit(1765854988.755:861): pid=5196 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.767112 kernel: audit: type=1103 audit(1765854988.762:862): pid=5200 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.912828 sshd[5200]: Connection closed by 147.75.109.163 port 49942 Dec 16 03:16:28.913963 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:28.913000 audit[5196]: USER_END pid=5196 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.922621 systemd[1]: sshd@22-146.190.151.166:22-147.75.109.163:49942.service: Deactivated successfully. Dec 16 03:16:28.922917 kernel: audit: type=1106 audit(1765854988.913:863): pid=5196 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:16:28.925465 systemd[1]: session-24.scope: Deactivated successfully. 
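Every calico pull failure above is containerd reporting a 404 from ghcr.io while resolving the `v3.30.4` tag. The resolution step can be reproduced against the OCI distribution API; the sketch below is a hedged illustration that assumes the registry hands out anonymous pull tokens for public repositories (the token endpoint and Accept headers are assumptions, not taken from the log):

```python
import json
import urllib.error
import urllib.request

def tag_exists(registry: str, repository: str, tag: str) -> bool:
    """Return True if <registry>/<repository>:<tag> resolves to a manifest."""
    # Anonymous pull token (assumed to be available for public repositories).
    token_url = (
        f"https://{registry}/token?scope=repository:{repository}:pull&service={registry}"
    )
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # HEAD the manifest for the tag; a 404 matches the "not found" errors above.
    manifest_url = f"https://{registry}/v2/{repository}/manifests/{tag}"
    req = urllib.request.Request(manifest_url, method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header(
        "Accept",
        "application/vnd.oci.image.index.v1+json, "
        "application/vnd.docker.distribution.manifest.list.v2+json, "
        "application/vnd.docker.distribution.manifest.v2+json",
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(tag_exists("ghcr.io", "flatcar/calico/apiserver", "v3.30.4"))
```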
Dec 16 03:16:28.913000 audit[5196]: CRED_DISP pid=5196 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:28.930212 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit.
Dec 16 03:16:28.931514 kernel: audit: type=1104 audit(1765854988.913:864): pid=5196 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:28.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-146.190.151.166:22-147.75.109.163:49942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:28.933861 systemd-logind[1566]: Removed session 24.
Dec 16 03:16:29.167845 kubelet[2815]: E1216 03:16:29.166633 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4"
Dec 16 03:16:29.291000 audit[5212]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:16:29.291000 audit[5212]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff1e6b1ef0 a2=0 a3=7fff1e6b1edc items=0 ppid=2938 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:29.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:16:29.296000 audit[5212]: NETFILTER_CFG table=nat:142 family=2 entries=104 op=nft_register_chain pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:16:29.296000 audit[5212]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff1e6b1ef0 a2=0 a3=7fff1e6b1edc items=0 ppid=2938 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:29.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:16:32.168358 kubelet[2815]: E1216 03:16:32.168315 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 16 03:16:33.166222 kubelet[2815]: E1216 03:16:33.166078 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7944db5ff7-jqshs" podUID="e282fbeb-8283-4c6d-8776-2743ee6a3341"
Dec 16 03:16:33.929156 systemd[1]: Started sshd@23-146.190.151.166:22-147.75.109.163:47844.service - OpenSSH per-connection server daemon (147.75.109.163:47844).
Dec 16 03:16:33.935717 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 16 03:16:33.935850 kernel: audit: type=1130 audit(1765854993.927:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-146.190.151.166:22-147.75.109.163:47844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:33.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-146.190.151.166:22-147.75.109.163:47844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.045000 audit[5216]: USER_ACCT pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.048069 sshd[5216]: Accepted publickey for core from 147.75.109.163 port 47844 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8
Dec 16 03:16:34.051008 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:16:34.054826 kernel: audit: type=1101 audit(1765854994.045:869): pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.047000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.063802 kernel: audit: type=1103 audit(1765854994.047:870): pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.063877 kernel: audit: type=1006 audit(1765854994.048:871): pid=5216 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 16 03:16:34.066475 kernel: audit: type=1300 audit(1765854994.048:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8856aaf0 a2=3 a3=0 items=0 ppid=1 pid=5216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.048000 audit[5216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8856aaf0 a2=3 a3=0 items=0 ppid=1 pid=5216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.064884 systemd-logind[1566]: New session 25 of user core.
Dec 16 03:16:34.048000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:34.073645 kernel: audit: type=1327 audit(1765854994.048:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:34.077074 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 03:16:34.081000 audit[5216]: USER_START pid=5216 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.084000 audit[5220]: CRED_ACQ pid=5220 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.094901 kernel: audit: type=1105 audit(1765854994.081:872): pid=5216 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.095044 kernel: audit: type=1103 audit(1765854994.084:873): pid=5220 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.212959 sshd[5220]: Connection closed by 147.75.109.163 port 47844
Dec 16 03:16:34.214209 sshd-session[5216]: pam_unix(sshd:session): session closed for user core
Dec 16 03:16:34.214000 audit[5216]: USER_END pid=5216 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.222277 kernel: audit: type=1106 audit(1765854994.214:874): pid=5216 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.222237 systemd[1]: sshd@23-146.190.151.166:22-147.75.109.163:47844.service: Deactivated successfully.
Dec 16 03:16:34.214000 audit[5216]: CRED_DISP pid=5216 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.225382 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 03:16:34.230445 kernel: audit: type=1104 audit(1765854994.214:875): pid=5216 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:34.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-146.190.151.166:22-147.75.109.163:47844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.231925 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit.
Dec 16 03:16:34.233560 systemd-logind[1566]: Removed session 25.
Dec 16 03:16:35.169508 kubelet[2815]: E1216 03:16:35.169448 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c76c6f97-2ccvf" podUID="4ef57fb4-ee02-432e-a890-17cb5351bf0b"
Dec 16 03:16:37.651849 kubelet[2815]: E1216 03:16:37.651441 2815 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 16 03:16:39.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-146.190.151.166:22-147.75.109.163:47850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:39.235347 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:16:39.235385 kernel: audit: type=1130 audit(1765854999.232:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-146.190.151.166:22-147.75.109.163:47850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:39.234029 systemd[1]: Started sshd@24-146.190.151.166:22-147.75.109.163:47850.service - OpenSSH per-connection server daemon (147.75.109.163:47850).
Dec 16 03:16:39.337000 audit[5256]: USER_ACCT pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.341963 sshd[5256]: Accepted publickey for core from 147.75.109.163 port 47850 ssh2: RSA SHA256:Wcr2uLEdDIbefoXx2hSQFmlZRB0VXpShwRrgfwS10H8
Dec 16 03:16:39.344956 kernel: audit: type=1101 audit(1765854999.337:878): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.343000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.348281 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:16:39.351189 kernel: audit: type=1103 audit(1765854999.343:879): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.352145 kernel: audit: type=1006 audit(1765854999.343:880): pid=5256 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 16 03:16:39.343000 audit[5256]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc3bfe270 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:39.363348 kernel: audit: type=1300 audit(1765854999.343:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc3bfe270 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:39.363588 kernel: audit: type=1327 audit(1765854999.343:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:39.343000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:39.360546 systemd-logind[1566]: New session 26 of user core.
Dec 16 03:16:39.367032 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 03:16:39.372000 audit[5256]: USER_START pid=5256 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.379896 kernel: audit: type=1105 audit(1765854999.372:881): pid=5256 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.378000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.386813 kernel: audit: type=1103 audit(1765854999.378:882): pid=5260 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.551896 sshd[5260]: Connection closed by 147.75.109.163 port 47850
Dec 16 03:16:39.552607 sshd-session[5256]: pam_unix(sshd:session): session closed for user core
Dec 16 03:16:39.555000 audit[5256]: USER_END pid=5256 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.562319 systemd[1]: sshd@24-146.190.151.166:22-147.75.109.163:47850.service: Deactivated successfully.
Dec 16 03:16:39.563816 kernel: audit: type=1106 audit(1765854999.555:883): pid=5256 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.555000 audit[5256]: CRED_DISP pid=5256 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.566672 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 03:16:39.571820 kernel: audit: type=1104 audit(1765854999.555:884): pid=5256 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:16:39.573011 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit.
Dec 16 03:16:39.575105 systemd-logind[1566]: Removed session 26.
Dec 16 03:16:39.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-146.190.151.166:22-147.75.109.163:47850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:40.167824 kubelet[2815]: E1216 03:16:40.167011 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9g7b7" podUID="b1b86abd-1663-4902-987a-286c262c84b0"
Dec 16 03:16:40.170121 kubelet[2815]: E1216 03:16:40.170067 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jz64m" podUID="2f0d2d2c-6593-4c7a-9cdf-35214b834c16"
Dec 16 03:16:42.168914 kubelet[2815]: E1216 03:16:42.168356 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-79frt" podUID="a48216b5-38ee-49b5-8de6-29119905fab4"
Dec 16 03:16:42.171718 kubelet[2815]: E1216 03:16:42.170361 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b66776c65-kfd2d" podUID="3fd5d06e-1026-4a73-b6dc-d83cdf5f2977"