Dec 15 09:00:39.008211 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Dec 13 21:00:48 -00 2025
Dec 15 09:00:39.008234 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d561a37bffac8b0c131876e06d146d4110f86fb5be729a7772a95bc6d82652ce
Dec 15 09:00:39.008247 kernel: BIOS-provided physical RAM map:
Dec 15 09:00:39.008255 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 15 09:00:39.008263 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 15 09:00:39.008271 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 15 09:00:39.008279 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 15 09:00:39.008286 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 15 09:00:39.008293 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 15 09:00:39.008300 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 15 09:00:39.008309 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 15 09:00:39.008316 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 15 09:00:39.008323 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 15 09:00:39.008330 kernel: NX (Execute Disable) protection: active
Dec 15 09:00:39.008338 kernel: APIC: Static calls initialized
Dec 15 09:00:39.008348 kernel: SMBIOS 2.8 present.
Dec 15 09:00:39.008355 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 15 09:00:39.008363 kernel: DMI: Memory slots populated: 1/1
Dec 15 09:00:39.008370 kernel: Hypervisor detected: KVM
Dec 15 09:00:39.008377 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 15 09:00:39.008385 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 15 09:00:39.008392 kernel: kvm-clock: using sched offset of 3321779262 cycles
Dec 15 09:00:39.008400 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 15 09:00:39.008408 kernel: tsc: Detected 2794.748 MHz processor
Dec 15 09:00:39.008416 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 15 09:00:39.008434 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 15 09:00:39.008442 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 15 09:00:39.008450 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 15 09:00:39.008458 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 15 09:00:39.008466 kernel: Using GB pages for direct mapping
Dec 15 09:00:39.008474 kernel: ACPI: Early table checksum verification disabled
Dec 15 09:00:39.008482 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 15 09:00:39.008492 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008502 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008512 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008523 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 15 09:00:39.008533 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008541 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008549 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008559 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 15 09:00:39.008570 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Dec 15 09:00:39.008579 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Dec 15 09:00:39.008587 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 15 09:00:39.008595 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Dec 15 09:00:39.008605 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Dec 15 09:00:39.008613 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Dec 15 09:00:39.008621 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Dec 15 09:00:39.008628 kernel: No NUMA configuration found
Dec 15 09:00:39.008636 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 15 09:00:39.008644 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Dec 15 09:00:39.008655 kernel: Zone ranges:
Dec 15 09:00:39.008663 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 15 09:00:39.008671 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 15 09:00:39.008679 kernel: Normal empty
Dec 15 09:00:39.008687 kernel: Device empty
Dec 15 09:00:39.008695 kernel: Movable zone start for each node
Dec 15 09:00:39.008703 kernel: Early memory node ranges
Dec 15 09:00:39.008710 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 15 09:00:39.008720 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 15 09:00:39.008729 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 15 09:00:39.008736 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 15 09:00:39.008744 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 15 09:00:39.008753 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 15 09:00:39.008761 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 15 09:00:39.008769 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 15 09:00:39.008777 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 15 09:00:39.008787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 15 09:00:39.008795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 15 09:00:39.008823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 15 09:00:39.008832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 15 09:00:39.008840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 15 09:00:39.008848 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 15 09:00:39.008856 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 15 09:00:39.008866 kernel: TSC deadline timer available
Dec 15 09:00:39.008874 kernel: CPU topo: Max. logical packages: 1
Dec 15 09:00:39.008882 kernel: CPU topo: Max. logical dies: 1
Dec 15 09:00:39.008890 kernel: CPU topo: Max. dies per package: 1
Dec 15 09:00:39.008898 kernel: CPU topo: Max. threads per core: 1
Dec 15 09:00:39.008906 kernel: CPU topo: Num. cores per package: 4
Dec 15 09:00:39.008914 kernel: CPU topo: Num. threads per package: 4
Dec 15 09:00:39.008924 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 15 09:00:39.008932 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 15 09:00:39.008940 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 15 09:00:39.008948 kernel: kvm-guest: setup PV sched yield
Dec 15 09:00:39.008956 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 15 09:00:39.008964 kernel: Booting paravirtualized kernel on KVM
Dec 15 09:00:39.008972 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 15 09:00:39.008980 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 15 09:00:39.008991 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 15 09:00:39.008999 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 15 09:00:39.009007 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 15 09:00:39.009015 kernel: kvm-guest: PV spinlocks enabled
Dec 15 09:00:39.009023 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 15 09:00:39.009033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d561a37bffac8b0c131876e06d146d4110f86fb5be729a7772a95bc6d82652ce
Dec 15 09:00:39.009041 kernel: random: crng init done
Dec 15 09:00:39.009051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 15 09:00:39.009060 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 15 09:00:39.009068 kernel: Fallback order for Node 0: 0
Dec 15 09:00:39.009076 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Dec 15 09:00:39.009084 kernel: Policy zone: DMA32
Dec 15 09:00:39.009092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 15 09:00:39.009100 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 15 09:00:39.009110 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 15 09:00:39.009119 kernel: ftrace: allocated 157 pages with 5 groups
Dec 15 09:00:39.009127 kernel: Dynamic Preempt: voluntary
Dec 15 09:00:39.009135 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 15 09:00:39.009147 kernel: rcu: RCU event tracing is enabled.
Dec 15 09:00:39.009156 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 15 09:00:39.009164 kernel: Trampoline variant of Tasks RCU enabled.
Dec 15 09:00:39.009174 kernel: Rude variant of Tasks RCU enabled.
Dec 15 09:00:39.009181 kernel: Tracing variant of Tasks RCU enabled.
Dec 15 09:00:39.009190 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 15 09:00:39.009198 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 15 09:00:39.009206 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 15 09:00:39.009214 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 15 09:00:39.009222 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 15 09:00:39.009230 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 15 09:00:39.009241 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 15 09:00:39.009255 kernel: Console: colour VGA+ 80x25
Dec 15 09:00:39.009266 kernel: printk: legacy console [ttyS0] enabled
Dec 15 09:00:39.009274 kernel: ACPI: Core revision 20240827
Dec 15 09:00:39.009283 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 15 09:00:39.009291 kernel: APIC: Switch to symmetric I/O mode setup
Dec 15 09:00:39.009300 kernel: x2apic enabled
Dec 15 09:00:39.009308 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 15 09:00:39.009317 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 15 09:00:39.009328 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 15 09:00:39.009336 kernel: kvm-guest: setup PV IPIs
Dec 15 09:00:39.009344 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 15 09:00:39.009353 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 15 09:00:39.009364 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 15 09:00:39.009372 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 15 09:00:39.009381 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 15 09:00:39.009389 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 15 09:00:39.009398 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 15 09:00:39.009406 kernel: Spectre V2 : Mitigation: Retpolines
Dec 15 09:00:39.009414 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 15 09:00:39.009432 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 15 09:00:39.009440 kernel: active return thunk: retbleed_return_thunk
Dec 15 09:00:39.009449 kernel: RETBleed: Mitigation: untrained return thunk
Dec 15 09:00:39.009457 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 15 09:00:39.009466 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 15 09:00:39.009475 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 15 09:00:39.009484 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 15 09:00:39.009495 kernel: active return thunk: srso_return_thunk
Dec 15 09:00:39.009506 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 15 09:00:39.009518 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 15 09:00:39.009529 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 15 09:00:39.009539 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 15 09:00:39.009548 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 15 09:00:39.009559 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 15 09:00:39.009567 kernel: Freeing SMP alternatives memory: 32K
Dec 15 09:00:39.009575 kernel: pid_max: default: 32768 minimum: 301
Dec 15 09:00:39.009584 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 15 09:00:39.009592 kernel: landlock: Up and running.
Dec 15 09:00:39.009600 kernel: SELinux: Initializing.
Dec 15 09:00:39.009609 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 15 09:00:39.009617 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 15 09:00:39.009628 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 15 09:00:39.009637 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 15 09:00:39.009645 kernel: ... version: 0
Dec 15 09:00:39.009654 kernel: ... bit width: 48
Dec 15 09:00:39.009662 kernel: ... generic registers: 6
Dec 15 09:00:39.009670 kernel: ... value mask: 0000ffffffffffff
Dec 15 09:00:39.009679 kernel: ... max period: 00007fffffffffff
Dec 15 09:00:39.009690 kernel: ... fixed-purpose events: 0
Dec 15 09:00:39.009698 kernel: ... event mask: 000000000000003f
Dec 15 09:00:39.009706 kernel: signal: max sigframe size: 1776
Dec 15 09:00:39.009715 kernel: rcu: Hierarchical SRCU implementation.
Dec 15 09:00:39.009723 kernel: rcu: Max phase no-delay instances is 400.
Dec 15 09:00:39.009732 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 15 09:00:39.009740 kernel: smp: Bringing up secondary CPUs ...
Dec 15 09:00:39.009751 kernel: smpboot: x86: Booting SMP configuration:
Dec 15 09:00:39.009759 kernel: .... node #0, CPUs: #1 #2 #3
Dec 15 09:00:39.009768 kernel: smp: Brought up 1 node, 4 CPUs
Dec 15 09:00:39.009776 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 15 09:00:39.009785 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15560K init, 2480K bss, 120520K reserved, 0K cma-reserved)
Dec 15 09:00:39.009793 kernel: devtmpfs: initialized
Dec 15 09:00:39.009824 kernel: x86/mm: Memory block size: 128MB
Dec 15 09:00:39.009836 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 15 09:00:39.009844 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 15 09:00:39.009853 kernel: pinctrl core: initialized pinctrl subsystem
Dec 15 09:00:39.009861 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 15 09:00:39.009870 kernel: audit: initializing netlink subsys (disabled)
Dec 15 09:00:39.009879 kernel: audit: type=2000 audit(1765789235.480:1): state=initialized audit_enabled=0 res=1
Dec 15 09:00:39.009887 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 15 09:00:39.009898 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 15 09:00:39.009906 kernel: cpuidle: using governor menu
Dec 15 09:00:39.009914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 15 09:00:39.009923 kernel: dca service started, version 1.12.1
Dec 15 09:00:39.009931 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 15 09:00:39.009940 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 15 09:00:39.009948 kernel: PCI: Using configuration type 1 for base access
Dec 15 09:00:39.009959 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 15 09:00:39.009968 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 15 09:00:39.009976 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 15 09:00:39.009985 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 15 09:00:39.009993 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 15 09:00:39.010001 kernel: ACPI: Added _OSI(Module Device)
Dec 15 09:00:39.010010 kernel: ACPI: Added _OSI(Processor Device)
Dec 15 09:00:39.010020 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 15 09:00:39.010029 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 15 09:00:39.010037 kernel: ACPI: Interpreter enabled
Dec 15 09:00:39.010045 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 15 09:00:39.010054 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 15 09:00:39.010062 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 15 09:00:39.010071 kernel: PCI: Using E820 reservations for host bridge windows
Dec 15 09:00:39.010082 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 15 09:00:39.010090 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 15 09:00:39.010373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 15 09:00:39.010576 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 15 09:00:39.010752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 15 09:00:39.010763 kernel: PCI host bridge to bus 0000:00
Dec 15 09:00:39.010960 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 15 09:00:39.011113 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 15 09:00:39.011268 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 15 09:00:39.011418 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 15 09:00:39.011590 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 15 09:00:39.011744 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 15 09:00:39.011919 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 15 09:00:39.012104 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 15 09:00:39.012286 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 15 09:00:39.012463 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 15 09:00:39.012638 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 15 09:00:39.012820 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 15 09:00:39.012988 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 15 09:00:39.013162 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 15 09:00:39.013328 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 15 09:00:39.013504 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 15 09:00:39.013681 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 15 09:00:39.013884 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 15 09:00:39.014053 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Dec 15 09:00:39.014219 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 15 09:00:39.014384 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 15 09:00:39.014576 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 15 09:00:39.014749 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Dec 15 09:00:39.014931 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Dec 15 09:00:39.015095 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 15 09:00:39.015262 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 15 09:00:39.015444 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 15 09:00:39.015621 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 15 09:00:39.015819 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 15 09:00:39.015988 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Dec 15 09:00:39.016153 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Dec 15 09:00:39.016329 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 15 09:00:39.016508 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 15 09:00:39.016524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 15 09:00:39.016539 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 15 09:00:39.016547 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 15 09:00:39.016556 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 15 09:00:39.016564 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 15 09:00:39.016573 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 15 09:00:39.016581 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 15 09:00:39.016590 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 15 09:00:39.016600 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 15 09:00:39.016609 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 15 09:00:39.016617 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 15 09:00:39.016626 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 15 09:00:39.016634 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 15 09:00:39.016642 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 15 09:00:39.016651 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 15 09:00:39.016662 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 15 09:00:39.016670 kernel: iommu: Default domain type: Translated
Dec 15 09:00:39.016679 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 15 09:00:39.016687 kernel: PCI: Using ACPI for IRQ routing
Dec 15 09:00:39.016696 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 15 09:00:39.016704 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 15 09:00:39.016713 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 15 09:00:39.016902 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 15 09:00:39.017067 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 15 09:00:39.017229 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 15 09:00:39.017240 kernel: vgaarb: loaded
Dec 15 09:00:39.017249 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 15 09:00:39.017259 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 15 09:00:39.017269 kernel: clocksource: Switched to clocksource kvm-clock
Dec 15 09:00:39.017283 kernel: VFS: Disk quotas dquot_6.6.0
Dec 15 09:00:39.017292 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 15 09:00:39.017300 kernel: pnp: PnP ACPI init
Dec 15 09:00:39.017484 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 15 09:00:39.017498 kernel: pnp: PnP ACPI: found 6 devices
Dec 15 09:00:39.017509 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 15 09:00:39.017525 kernel: NET: Registered PF_INET protocol family
Dec 15 09:00:39.017537 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 15 09:00:39.017546 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 15 09:00:39.017554 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 15 09:00:39.017563 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 15 09:00:39.017571 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 15 09:00:39.017580 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 15 09:00:39.017591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 15 09:00:39.017600 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 15 09:00:39.017609 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 15 09:00:39.017617 kernel: NET: Registered PF_XDP protocol family
Dec 15 09:00:39.017838 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 15 09:00:39.017998 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 15 09:00:39.018151 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 15 09:00:39.018310 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 15 09:00:39.018473 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 15 09:00:39.018642 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 15 09:00:39.018656 kernel: PCI: CLS 0 bytes, default 64
Dec 15 09:00:39.018665 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 15 09:00:39.018673 kernel: Initialise system trusted keyrings
Dec 15 09:00:39.018682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 15 09:00:39.018694 kernel: Key type asymmetric registered
Dec 15 09:00:39.018703 kernel: Asymmetric key parser 'x509' registered
Dec 15 09:00:39.018711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 15 09:00:39.018720 kernel: io scheduler mq-deadline registered
Dec 15 09:00:39.018728 kernel: io scheduler kyber registered
Dec 15 09:00:39.018737 kernel: io scheduler bfq registered
Dec 15 09:00:39.018745 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 15 09:00:39.018757 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 15 09:00:39.018766 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 15 09:00:39.018774 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 15 09:00:39.018783 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 15 09:00:39.018791 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 15 09:00:39.018800 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 15 09:00:39.018830 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 15 09:00:39.018842 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 15 09:00:39.018851 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 15 09:00:39.019021 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 15 09:00:39.019178 kernel: rtc_cmos 00:04: registered as rtc0
Dec 15 09:00:39.019333 kernel: rtc_cmos 00:04: setting system clock to 2025-12-15T09:00:37 UTC (1765789237)
Dec 15 09:00:39.019499 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 15 09:00:39.019518 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 15 09:00:39.019530 kernel: NET: Registered PF_INET6 protocol family
Dec 15 09:00:39.019539 kernel: Segment Routing with IPv6
Dec 15 09:00:39.019548 kernel: In-situ OAM (IOAM) with IPv6
Dec 15 09:00:39.019557 kernel: NET: Registered PF_PACKET protocol family
Dec 15 09:00:39.019565 kernel: Key type dns_resolver registered
Dec 15 09:00:39.019574 kernel: IPI shorthand broadcast: enabled
Dec 15 09:00:39.019585 kernel: sched_clock: Marking stable (1600002789, 200523646)->(1921224702, -120698267)
Dec 15 09:00:39.019593 kernel: registered taskstats version 1
Dec 15 09:00:39.019602 kernel: Loading compiled-in X.509 certificates
Dec 15 09:00:39.019610 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 06a38b380d44d07d04cfbe1e7c6c1ac507d2455e'
Dec 15 09:00:39.019619 kernel: Demotion targets for Node 0: null
Dec 15 09:00:39.019628 kernel: Key type .fscrypt registered
Dec 15 09:00:39.019636 kernel: Key type fscrypt-provisioning registered
Dec 15 09:00:39.019644 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 15 09:00:39.019655 kernel: ima: Allocated hash algorithm: sha1
Dec 15 09:00:39.019664 kernel: ima: No architecture policies found
Dec 15 09:00:39.019672 kernel: clk: Disabling unused clocks
Dec 15 09:00:39.019681 kernel: Freeing unused kernel image (initmem) memory: 15560K
Dec 15 09:00:39.019689 kernel: Write protecting the kernel read-only data: 47104k
Dec 15 09:00:39.019698 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 15 09:00:39.019706 kernel: Run /init as init process
Dec 15 09:00:39.019717 kernel: with arguments:
Dec 15 09:00:39.019726 kernel: /init
Dec 15 09:00:39.019734 kernel: with environment:
Dec 15 09:00:39.019743 kernel: HOME=/
Dec 15 09:00:39.019751 kernel: TERM=linux
Dec 15 09:00:39.019759 kernel: SCSI subsystem initialized
Dec 15 09:00:39.019768 kernel: libata version 3.00 loaded.
Dec 15 09:00:39.019956 kernel: ahci 0000:00:1f.2: version 3.0
Dec 15 09:00:39.019987 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 15 09:00:39.020155 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 15 09:00:39.020320 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 15 09:00:39.020492 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 15 09:00:39.020688 kernel: scsi host0: ahci
Dec 15 09:00:39.020885 kernel: scsi host1: ahci
Dec 15 09:00:39.021065 kernel: scsi host2: ahci
Dec 15 09:00:39.021240 kernel: scsi host3: ahci
Dec 15 09:00:39.021414 kernel: scsi host4: ahci
Dec 15 09:00:39.021610 kernel: scsi host5: ahci
Dec 15 09:00:39.021627 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Dec 15 09:00:39.021637 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Dec 15 09:00:39.021651 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Dec 15 09:00:39.021664 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Dec 15 09:00:39.021680 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Dec 15 09:00:39.021700 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Dec 15 09:00:39.021726 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 15 09:00:39.021746 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 15 09:00:39.021763 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 15 09:00:39.021783 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 15 09:00:39.021814 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 15 09:00:39.021834 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 15 09:00:39.021854 kernel: ata3.00: LPM support broken, forcing max_power
Dec 15 09:00:39.021873 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 15 09:00:39.021896 kernel: ata3.00: applying bridge limits
Dec 15 09:00:39.021916 kernel: ata3.00: LPM support broken, forcing max_power
Dec 15 09:00:39.021935 kernel: ata3.00: configured for UDMA/100
Dec 15 09:00:39.022378 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 15 09:00:39.022697 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 15 09:00:39.022945 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 15 09:00:39.022963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 15 09:00:39.022972 kernel: GPT:16515071 != 27000831
Dec 15 09:00:39.022981 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 15 09:00:39.022989 kernel: GPT:16515071 != 27000831
Dec 15 09:00:39.022998 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 15 09:00:39.023006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 15 09:00:39.023188 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 15 09:00:39.023203 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 15 09:00:39.023389 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 15 09:00:39.023402 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 15 09:00:39.023411 kernel: device-mapper: uevent: version 1.0.3
Dec 15 09:00:39.023419 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 15 09:00:39.023437 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 15 09:00:39.023449 kernel: raid6: avx2x4 gen() 30590 MB/s
Dec 15 09:00:39.023458 kernel: raid6: avx2x2 gen() 31442 MB/s
Dec 15 09:00:39.023467 kernel: raid6: avx2x1 gen() 25958 MB/s
Dec 15 09:00:39.023476 kernel: raid6: using algorithm avx2x2 gen() 31442 MB/s
Dec 15 09:00:39.023485 kernel: raid6: .... xor() 19945 MB/s, rmw enabled
Dec 15 09:00:39.023496 kernel: raid6: using avx2x2 recovery algorithm
Dec 15 09:00:39.023508 kernel: xor: automatically using best checksumming function avx
Dec 15 09:00:39.023548 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 15 09:00:39.023558 kernel: BTRFS: device fsid 54e518d7-91ad-406b-a384-28d306fbc024 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (183)
Dec 15 09:00:39.023567 kernel: BTRFS info (device dm-0): first mount of filesystem 54e518d7-91ad-406b-a384-28d306fbc024
Dec 15 09:00:39.023576 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 15 09:00:39.023584 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 15 09:00:39.023596 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 15 09:00:39.023605 kernel: loop: module loaded
Dec 15 09:00:39.023614 kernel: loop0: detected capacity change from 0 to 100528
Dec 15 09:00:39.023623 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 15 09:00:39.023633 systemd[1]: Successfully made /usr/ read-only.
Dec 15 09:00:39.023644 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 15 09:00:39.023657 systemd[1]: Detected virtualization kvm.
Dec 15 09:00:39.023665 systemd[1]: Detected architecture x86-64.
Dec 15 09:00:39.023674 systemd[1]: Running in initrd.
Dec 15 09:00:39.023683 systemd[1]: No hostname configured, using default hostname.
Dec 15 09:00:39.023695 systemd[1]: Hostname set to .
Dec 15 09:00:39.023704 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 15 09:00:39.023715 systemd[1]: Queued start job for default target initrd.target.
Dec 15 09:00:39.023725 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 15 09:00:39.023734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 15 09:00:39.023744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 15 09:00:39.023753 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 15 09:00:39.023763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 15 09:00:39.023776 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 15 09:00:39.023785 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 15 09:00:39.023795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 15 09:00:39.023825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 15 09:00:39.023835 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 15 09:00:39.023844 systemd[1]: Reached target paths.target - Path Units.
Dec 15 09:00:39.023853 systemd[1]: Reached target slices.target - Slice Units.
Dec 15 09:00:39.023865 systemd[1]: Reached target swap.target - Swaps.
Dec 15 09:00:39.023874 systemd[1]: Reached target timers.target - Timer Units.
Dec 15 09:00:39.023884 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 15 09:00:39.023893 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 15 09:00:39.023903 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 15 09:00:39.023912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 15 09:00:39.023923 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 15 09:00:39.023933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 15 09:00:39.023942 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 15 09:00:39.023952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 15 09:00:39.023961 systemd[1]: Reached target sockets.target - Socket Units.
Dec 15 09:00:39.023970 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 15 09:00:39.023979 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 15 09:00:39.023991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 15 09:00:39.024000 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 15 09:00:39.024010 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 15 09:00:39.024019 systemd[1]: Starting systemd-fsck-usr.service...
Dec 15 09:00:39.024029 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 15 09:00:39.024038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 15 09:00:39.024050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 15 09:00:39.024059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 15 09:00:39.024071 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 15 09:00:39.024080 systemd[1]: Finished systemd-fsck-usr.service.
Dec 15 09:00:39.024091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 15 09:00:39.024123 systemd-journald[317]: Collecting audit messages is enabled.
Dec 15 09:00:39.024144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 15 09:00:39.024156 systemd-journald[317]: Journal started
Dec 15 09:00:39.024180 systemd-journald[317]: Runtime Journal (/run/log/journal/09cfc80ec1c14eeca9f5d3fd77cb1371) is 6M, max 48.2M, 42.1M free.
Dec 15 09:00:39.032603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 15 09:00:39.032634 kernel: audit: type=1130 audit(1765789239.027:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.032647 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 15 09:00:39.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.039832 kernel: audit: type=1130 audit(1765789239.034:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.040895 systemd-modules-load[320]: Inserted module 'br_netfilter'
Dec 15 09:00:39.048019 kernel: Bridge firewalling registered
Dec 15 09:00:39.048045 kernel: audit: type=1130 audit(1765789239.042:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.042165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 15 09:00:39.048770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 15 09:00:39.125613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 15 09:00:39.141164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 15 09:00:39.145296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 15 09:00:39.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.152590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 15 09:00:39.155785 kernel: audit: type=1130 audit(1765789239.146:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.159415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 15 09:00:39.167788 kernel: audit: type=1130 audit(1765789239.154:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.167855 kernel: audit: type=1130 audit(1765789239.160:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.162045 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 15 09:00:39.164657 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 15 09:00:39.177000 audit: BPF prog-id=6 op=LOAD
Dec 15 09:00:39.179538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 15 09:00:39.182227 kernel: audit: type=1334 audit(1765789239.177:8): prog-id=6 op=LOAD
Dec 15 09:00:39.183651 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 15 09:00:39.190282 kernel: audit: type=1130 audit(1765789239.183:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.201709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 15 09:00:39.209734 kernel: audit: type=1130 audit(1765789239.202:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.204487 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 15 09:00:39.233697 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d561a37bffac8b0c131876e06d146d4110f86fb5be729a7772a95bc6d82652ce
Dec 15 09:00:39.244094 systemd-resolved[348]: Positive Trust Anchors:
Dec 15 09:00:39.244111 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 15 09:00:39.244115 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 15 09:00:39.244147 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 15 09:00:39.275677 systemd-resolved[348]: Defaulting to hostname 'linux'.
Dec 15 09:00:39.276876 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 15 09:00:39.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.278255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 15 09:00:39.369851 kernel: Loading iSCSI transport class v2.0-870.
Dec 15 09:00:39.383838 kernel: iscsi: registered transport (tcp)
Dec 15 09:00:39.406998 kernel: iscsi: registered transport (qla4xxx)
Dec 15 09:00:39.407037 kernel: QLogic iSCSI HBA Driver
Dec 15 09:00:39.433861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 15 09:00:39.466490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 15 09:00:39.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.467557 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 15 09:00:39.533056 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 15 09:00:39.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.535201 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 15 09:00:39.539595 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 15 09:00:39.579455 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 15 09:00:39.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.580000 audit: BPF prog-id=7 op=LOAD
Dec 15 09:00:39.580000 audit: BPF prog-id=8 op=LOAD
Dec 15 09:00:39.581754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 15 09:00:39.612292 systemd-udevd[594]: Using default interface naming scheme 'v257'.
Dec 15 09:00:39.625693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 15 09:00:39.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.631571 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 15 09:00:39.661928 dracut-pre-trigger[656]: rd.md=0: removing MD RAID activation
Dec 15 09:00:39.676432 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 15 09:00:39.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.681000 audit: BPF prog-id=9 op=LOAD
Dec 15 09:00:39.682738 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 15 09:00:39.699424 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 15 09:00:39.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.702506 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 15 09:00:39.734020 systemd-networkd[723]: lo: Link UP
Dec 15 09:00:39.734028 systemd-networkd[723]: lo: Gained carrier
Dec 15 09:00:39.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.734608 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 15 09:00:39.735607 systemd[1]: Reached target network.target - Network.
Dec 15 09:00:39.795919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 15 09:00:39.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.799861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 15 09:00:39.841164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 15 09:00:39.867190 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 15 09:00:39.908843 kernel: cryptd: max_cpu_qlen set to 1000
Dec 15 09:00:39.920007 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 15 09:00:39.917802 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 15 09:00:39.923825 kernel: AES CTR mode by8 optimization enabled
Dec 15 09:00:39.924728 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 15 09:00:39.924740 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 15 09:00:39.926125 systemd-networkd[723]: eth0: Link UP
Dec 15 09:00:39.926317 systemd-networkd[723]: eth0: Gained carrier
Dec 15 09:00:39.926326 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 15 09:00:39.932495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 15 09:00:39.947134 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 15 09:00:39.948889 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 15 09:00:39.951173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 15 09:00:39.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:39.951284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 15 09:00:39.955045 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 15 09:00:39.960612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 15 09:00:39.969943 disk-uuid[827]: Primary Header is updated.
Dec 15 09:00:39.969943 disk-uuid[827]: Secondary Entries is updated.
Dec 15 09:00:39.969943 disk-uuid[827]: Secondary Header is updated.
Dec 15 09:00:40.056015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 15 09:00:40.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:40.064999 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 15 09:00:40.067901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 15 09:00:40.069787 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 15 09:00:40.072706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 15 09:00:40.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:40.076742 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 15 09:00:40.095996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 15 09:00:40.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.014300 disk-uuid[835]: Warning: The kernel is still using the old partition table.
Dec 15 09:00:41.014300 disk-uuid[835]: The new table will be used at the next reboot or after you
Dec 15 09:00:41.014300 disk-uuid[835]: run partprobe(8) or kpartx(8)
Dec 15 09:00:41.014300 disk-uuid[835]: The operation has completed successfully.
Dec 15 09:00:41.026567 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 15 09:00:41.026710 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 15 09:00:41.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.031527 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 15 09:00:41.072837 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861)
Dec 15 09:00:41.072889 kernel: BTRFS info (device vda6): first mount of filesystem 8d7009aa-41bd-4807-b2d3-c062d8bdb0eb
Dec 15 09:00:41.075717 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 15 09:00:41.079813 kernel: BTRFS info (device vda6): turning on async discard
Dec 15 09:00:41.079843 kernel: BTRFS info (device vda6): enabling free space tree
Dec 15 09:00:41.087830 kernel: BTRFS info (device vda6): last unmount of filesystem 8d7009aa-41bd-4807-b2d3-c062d8bdb0eb
Dec 15 09:00:41.088452 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 15 09:00:41.097917 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 15 09:00:41.097938 kernel: audit: type=1130 audit(1765789241.089:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.091243 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 15 09:00:41.205743 ignition[880]: Ignition 2.24.0
Dec 15 09:00:41.205755 ignition[880]: Stage: fetch-offline
Dec 15 09:00:41.206070 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Dec 15 09:00:41.206084 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 15 09:00:41.206176 ignition[880]: parsed url from cmdline: ""
Dec 15 09:00:41.206180 ignition[880]: no config URL provided
Dec 15 09:00:41.206186 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Dec 15 09:00:41.206196 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Dec 15 09:00:41.206237 ignition[880]: op(1): [started] loading QEMU firmware config module
Dec 15 09:00:41.206242 ignition[880]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 15 09:00:41.214411 ignition[880]: op(1): [finished] loading QEMU firmware config module
Dec 15 09:00:41.299005 ignition[880]: parsing config with SHA512: 4a8d1687f0d1e250e51bd82227e9a452f7018b1d3893d4c3e8951512655edf80b8c37131972d37fc0df20b01699b3d00e2b3deed602f487294b2dc6018cca7fe
Dec 15 09:00:41.304195 unknown[880]: fetched base config from "system"
Dec 15 09:00:41.304212 unknown[880]: fetched user config from "qemu"
Dec 15 09:00:41.304549 ignition[880]: fetch-offline: fetch-offline passed
Dec 15 09:00:41.304602 ignition[880]: Ignition finished successfully
Dec 15 09:00:41.311939 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 15 09:00:41.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.315850 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 15 09:00:41.322298 kernel: audit: type=1130 audit(1765789241.314:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.316941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 15 09:00:41.345230 ignition[891]: Ignition 2.24.0
Dec 15 09:00:41.345245 ignition[891]: Stage: kargs
Dec 15 09:00:41.345393 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Dec 15 09:00:41.345403 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 15 09:00:41.346054 ignition[891]: kargs: kargs passed
Dec 15 09:00:41.346102 ignition[891]: Ignition finished successfully
Dec 15 09:00:41.352334 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 15 09:00:41.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.357110 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 15 09:00:41.362235 kernel: audit: type=1130 audit(1765789241.354:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.387715 ignition[897]: Ignition 2.24.0
Dec 15 09:00:41.387726 ignition[897]: Stage: disks
Dec 15 09:00:41.387863 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Dec 15 09:00:41.387872 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 15 09:00:41.388474 ignition[897]: disks: disks passed
Dec 15 09:00:41.402732 kernel: audit: type=1130 audit(1765789241.392:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.392375 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 15 09:00:41.388517 ignition[897]: Ignition finished successfully
Dec 15 09:00:41.393722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 15 09:00:41.400710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 15 09:00:41.402707 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 15 09:00:41.403313 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 15 09:00:41.403880 systemd[1]: Reached target basic.target - Basic System.
Dec 15 09:00:41.413673 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 15 09:00:41.464312 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 15 09:00:41.471875 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 15 09:00:41.480114 kernel: audit: type=1130 audit(1765789241.473:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.479165 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 15 09:00:41.482542 systemd-networkd[723]: eth0: Gained IPv6LL
Dec 15 09:00:41.587854 kernel: EXT4-fs (vda9): mounted filesystem 0249e8e3-9253-419b-b3bb-5aef252bada0 r/w with ordered data mode. Quota mode: none.
Dec 15 09:00:41.588748 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 15 09:00:41.589937 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 15 09:00:41.593323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 15 09:00:41.618147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 15 09:00:41.619512 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 15 09:00:41.627880 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913)
Dec 15 09:00:41.619555 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 15 09:00:41.619583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 15 09:00:41.636377 kernel: BTRFS info (device vda6): first mount of filesystem 8d7009aa-41bd-4807-b2d3-c062d8bdb0eb
Dec 15 09:00:41.636399 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 15 09:00:41.638866 kernel: BTRFS info (device vda6): turning on async discard
Dec 15 09:00:41.638892 kernel: BTRFS info (device vda6): enabling free space tree
Dec 15 09:00:41.640083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 15 09:00:41.660028 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 15 09:00:41.662138 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 15 09:00:41.839323 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 15 09:00:41.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.844023 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 15 09:00:41.850678 kernel: audit: type=1130 audit(1765789241.841:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.848483 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 15 09:00:41.862829 kernel: BTRFS info (device vda6): last unmount of filesystem 8d7009aa-41bd-4807-b2d3-c062d8bdb0eb
Dec 15 09:00:41.880013 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 15 09:00:41.886253 kernel: audit: type=1130 audit(1765789241.879:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.895069 ignition[1011]: INFO : Ignition 2.24.0
Dec 15 09:00:41.895069 ignition[1011]: INFO : Stage: mount
Dec 15 09:00:41.897515 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 15 09:00:41.897515 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 15 09:00:41.897515 ignition[1011]: INFO : mount: mount passed
Dec 15 09:00:41.897515 ignition[1011]: INFO : Ignition finished successfully
Dec 15 09:00:41.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.900284 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 15 09:00:41.911267 kernel: audit: type=1130 audit(1765789241.902:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:41.905034 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 15 09:00:42.063774 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 15 09:00:42.065594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 15 09:00:42.095821 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022)
Dec 15 09:00:42.099117 kernel: BTRFS info (device vda6): first mount of filesystem 8d7009aa-41bd-4807-b2d3-c062d8bdb0eb
Dec 15 09:00:42.099142 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 15 09:00:42.102710 kernel: BTRFS info (device vda6): turning on async discard
Dec 15 09:00:42.102726 kernel: BTRFS info (device vda6): enabling free space tree
Dec 15 09:00:42.104262 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 15 09:00:42.137560 ignition[1039]: INFO : Ignition 2.24.0
Dec 15 09:00:42.137560 ignition[1039]: INFO : Stage: files
Dec 15 09:00:42.140382 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 15 09:00:42.140382 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 15 09:00:42.140382 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
Dec 15 09:00:42.140382 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 15 09:00:42.140382 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 15 09:00:42.151248 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 15 09:00:42.151248 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 15 09:00:42.151248 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 15 09:00:42.151248 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 15 09:00:42.151248 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 15 09:00:42.144768 unknown[1039]: wrote ssh authorized keys file for user: core
Dec 15 09:00:42.195490 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 15 09:00:42.255383 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 15 09:00:42.258974 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 15 09:00:42.283784 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 15 09:00:42.283784 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 15 09:00:42.283784 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 15 09:00:42.283784 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 15 09:00:42.283784 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 15 09:00:42.283784 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 15 09:00:42.734822 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 15 09:00:43.248199 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 15 09:00:43.248199 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 15 09:00:43.255406 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 15 09:00:43.283610 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 15 09:00:43.287855 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 15 09:00:43.290598 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 15 09:00:43.290598 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 15 09:00:43.290598 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 15 09:00:43.290598 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 15 09:00:43.290598 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 15 09:00:43.290598 ignition[1039]: INFO : files: files passed
Dec 15 09:00:43.290598 ignition[1039]: INFO : Ignition finished successfully
Dec 15 09:00:43.315493 kernel: audit: type=1130 audit(1765789243.305:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.304138 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 15 09:00:43.307795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 15 09:00:43.313740 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 15 09:00:43.334438 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 15 09:00:43.342688 kernel: audit: type=1130 audit(1765789243.335:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.334583 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 15 09:00:43.349765 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 15 09:00:43.357324 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 15 09:00:43.357324 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 15 09:00:43.363150 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 15 09:00:43.368489 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 15 09:00:43.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.369659 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 15 09:00:43.374675 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 15 09:00:43.470487 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 15 09:00:43.470619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 15 09:00:43.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.474257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 15 09:00:43.477627 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 15 09:00:43.479573 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 15 09:00:43.480412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 15 09:00:43.511734 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 15 09:00:43.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.513827 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 15 09:00:43.545674 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 15 09:00:43.545815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 15 09:00:43.546710 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 15 09:00:43.551724 systemd[1]: Stopped target timers.target - Timer Units.
Dec 15 09:00:43.555246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 15 09:00:43.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.555366 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 15 09:00:43.560682 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 15 09:00:43.561560 systemd[1]: Stopped target basic.target - Basic System.
Dec 15 09:00:43.568096 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 15 09:00:43.568852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 15 09:00:43.572397 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 15 09:00:43.575594 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 15 09:00:43.579212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 15 09:00:43.582291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 15 09:00:43.585388 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 15 09:00:43.589400 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 15 09:00:43.592460 systemd[1]: Stopped target swap.target - Swaps.
Dec 15 09:00:43.595419 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 15 09:00:43.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.595547 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 15 09:00:43.600098 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 15 09:00:43.603377 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 15 09:00:43.606752 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 15 09:00:43.606910 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 15 09:00:43.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.607632 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 15 09:00:43.607761 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 15 09:00:43.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.615658 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 15 09:00:43.615785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 15 09:00:43.616631 systemd[1]: Stopped target paths.target - Path Units.
Dec 15 09:00:43.621224 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 15 09:00:43.624880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 15 09:00:43.625720 systemd[1]: Stopped target slices.target - Slice Units.
Dec 15 09:00:43.628904 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 15 09:00:43.631874 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 15 09:00:43.631987 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 15 09:00:43.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.635557 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 15 09:00:43.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.635661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 15 09:00:43.638339 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 15 09:00:43.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.638433 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 15 09:00:43.641266 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 15 09:00:43.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.641406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 15 09:00:43.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.644454 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 15 09:00:43.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.644574 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 15 09:00:43.648254 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 15 09:00:43.650213 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 15 09:00:43.650376 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 15 09:00:43.654359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 15 09:00:43.656624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 15 09:00:43.656762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 15 09:00:43.659641 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 15 09:00:43.659761 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 15 09:00:43.663284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 15 09:00:43.663395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 15 09:00:43.688979 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 15 09:00:43.689150 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 15 09:00:43.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.701486 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 15 09:00:43.703021 ignition[1096]: INFO : Ignition 2.24.0
Dec 15 09:00:43.703021 ignition[1096]: INFO : Stage: umount
Dec 15 09:00:43.705873 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 15 09:00:43.705873 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 15 09:00:43.705873 ignition[1096]: INFO : umount: umount passed
Dec 15 09:00:43.705873 ignition[1096]: INFO : Ignition finished successfully
Dec 15 09:00:43.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.706008 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 15 09:00:43.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.706156 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 15 09:00:43.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.707846 systemd[1]: Stopped target network.target - Network.
Dec 15 09:00:43.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.711555 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 15 09:00:43.711620 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 15 09:00:43.714219 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 15 09:00:43.714278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 15 09:00:43.717253 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 15 09:00:43.717318 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 15 09:00:43.721076 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 15 09:00:43.721140 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 15 09:00:43.722355 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 15 09:00:43.728364 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 15 09:00:43.744524 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 15 09:00:43.744670 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 15 09:00:43.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.750508 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 15 09:00:43.750627 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 15 09:00:43.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.758130 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 15 09:00:43.757000 audit: BPF prog-id=6 op=UNLOAD
Dec 15 09:00:43.761692 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 15 09:00:43.761000 audit: BPF prog-id=9 op=UNLOAD
Dec 15 09:00:43.761746 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 15 09:00:43.763152 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 15 09:00:43.768132 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 15 09:00:43.769707 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 15 09:00:43.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.775316 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 15 09:00:43.776756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 15 09:00:43.778632 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 15 09:00:43.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.778688 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 15 09:00:43.779457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 15 09:00:43.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.784179 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 15 09:00:43.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.784279 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 15 09:00:43.787894 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 15 09:00:43.787984 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 15 09:00:43.811762 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 15 09:00:43.811968 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 15 09:00:43.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.813070 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 15 09:00:43.813119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 15 09:00:43.817743 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 15 09:00:43.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.817784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 15 09:00:43.821133 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 15 09:00:43.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.821190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 15 09:00:43.826573 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 15 09:00:43.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.826652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 15 09:00:43.831176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 15 09:00:43.831254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 15 09:00:43.841331 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 15 09:00:43.842196 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 15 09:00:43.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.842249 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 15 09:00:43.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.845370 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 15 09:00:43.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.845420 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 15 09:00:43.848853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 15 09:00:43.848902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 15 09:00:43.853717 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 15 09:00:43.862411 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 15 09:00:43.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.872324 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 15 09:00:43.872435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 15 09:00:43.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:43.875968 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 15 09:00:43.879792 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 15 09:00:43.915687 systemd[1]: Switching root.
Dec 15 09:00:43.951578 systemd-journald[317]: Journal stopped
Dec 15 09:00:45.343475 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Dec 15 09:00:45.343546 kernel: SELinux: policy capability network_peer_controls=1
Dec 15 09:00:45.343570 kernel: SELinux: policy capability open_perms=1
Dec 15 09:00:45.343588 kernel: SELinux: policy capability extended_socket_class=1
Dec 15 09:00:45.343601 kernel: SELinux: policy capability always_check_network=0
Dec 15 09:00:45.343618 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 15 09:00:45.343630 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 15 09:00:45.343647 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 15 09:00:45.343659 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 15 09:00:45.343678 kernel: SELinux: policy capability userspace_initial_context=0
Dec 15 09:00:45.343696 systemd[1]: Successfully loaded SELinux policy in 63.536ms.
Dec 15 09:00:45.343720 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.494ms.
Dec 15 09:00:45.343735 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 15 09:00:45.343748 systemd[1]: Detected virtualization kvm.
Dec 15 09:00:45.343761 systemd[1]: Detected architecture x86-64.
Dec 15 09:00:45.343774 systemd[1]: Detected first boot.
Dec 15 09:00:45.343789 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 15 09:00:45.343801 zram_generator::config[1140]: No configuration found.
Dec 15 09:00:45.343830 kernel: Guest personality initialized and is inactive
Dec 15 09:00:45.343851 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 15 09:00:45.343863 kernel: Initialized host personality
Dec 15 09:00:45.343875 kernel: NET: Registered PF_VSOCK protocol family
Dec 15 09:00:45.343887 systemd[1]: Populated /etc with preset unit settings.
Dec 15 09:00:45.343904 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 15 09:00:45.343922 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 15 09:00:45.343940 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 15 09:00:45.343965 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 15 09:00:45.343984 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 15 09:00:45.344001 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 15 09:00:45.344016 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 15 09:00:45.344032 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 15 09:00:45.344045 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 15 09:00:45.344058 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 15 09:00:45.344070 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 15 09:00:45.344084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 15 09:00:45.344097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 15 09:00:45.344110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 15 09:00:45.344125 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 15 09:00:45.344140 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 15 09:00:45.344154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 15 09:00:45.344167 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 15 09:00:45.344180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 15 09:00:45.344193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 15 09:00:45.345886 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 15 09:00:45.345903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 15 09:00:45.345916 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 15 09:00:45.345931 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 15 09:00:45.345944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 15 09:00:45.345957 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 15 09:00:45.345970 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Dec 15 09:00:45.345985 systemd[1]: Reached target slices.target - Slice Units.
Dec 15 09:00:45.345997 systemd[1]: Reached target swap.target - Swaps.
Dec 15 09:00:45.346010 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 15 09:00:45.346023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 15 09:00:45.346036 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 15 09:00:45.346049 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 15 09:00:45.346062 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Dec 15 09:00:45.346076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 15 09:00:45.346089 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Dec 15 09:00:45.346102 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Dec 15 09:00:45.346114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 15 09:00:45.346127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 15 09:00:45.346142 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 15 09:00:45.346165 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 15 09:00:45.346190 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 15 09:00:45.346206 systemd[1]: Mounting media.mount - External Media Directory...
Dec 15 09:00:45.346222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 15 09:00:45.346236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 15 09:00:45.346248 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 15 09:00:45.346270 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 15 09:00:45.346284 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 15 09:00:45.346300 systemd[1]: Reached target machines.target - Containers.
Dec 15 09:00:45.346313 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 15 09:00:45.346326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 15 09:00:45.346339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 15 09:00:45.346351 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 15 09:00:45.346365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 15 09:00:45.346378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 15 09:00:45.346392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 15 09:00:45.346405 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 15 09:00:45.346418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 15 09:00:45.346432 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 15 09:00:45.346444 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 15 09:00:45.346457 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 15 09:00:45.346470 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 15 09:00:45.346485 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 15 09:00:45.346500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 15 09:00:45.346518 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 15 09:00:45.346538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 15 09:00:45.346551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 15 09:00:45.346564 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 15 09:00:45.346577 kernel: ACPI: bus type drm_connector registered
Dec 15 09:00:45.346590 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 15 09:00:45.346602 kernel: fuse: init (API version 7.41)
Dec 15 09:00:45.346614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 15 09:00:45.346628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 15 09:00:45.346643 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 15 09:00:45.346676 systemd-journald[1215]: Collecting audit messages is enabled.
Dec 15 09:00:45.346702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 15 09:00:45.346715 systemd[1]: Mounted media.mount - External Media Directory.
Dec 15 09:00:45.346727 systemd-journald[1215]: Journal started
Dec 15 09:00:45.346752 systemd-journald[1215]: Runtime Journal (/run/log/journal/09cfc80ec1c14eeca9f5d3fd77cb1371) is 6M, max 48.2M, 42.1M free.
Dec 15 09:00:45.150000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 15 09:00:45.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.292000 audit: BPF prog-id=14 op=UNLOAD
Dec 15 09:00:45.292000 audit: BPF prog-id=13 op=UNLOAD
Dec 15 09:00:45.294000 audit: BPF prog-id=15 op=LOAD
Dec 15 09:00:45.294000 audit: BPF prog-id=16 op=LOAD
Dec 15 09:00:45.294000 audit: BPF prog-id=17 op=LOAD
Dec 15 09:00:45.339000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 15 09:00:45.339000 audit[1215]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe70427e30 a2=4000 a3=0 items=0 ppid=1 pid=1215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 15 09:00:45.339000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 15 09:00:45.023109 systemd[1]: Queued start job for default target multi-user.target.
Dec 15 09:00:45.038637 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 15 09:00:45.039141 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 15 09:00:45.351920 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 15 09:00:45.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.353453 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 15 09:00:45.355268 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 15 09:00:45.357411 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 15 09:00:45.359446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 15 09:00:45.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.361905 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 15 09:00:45.362143 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 15 09:00:45.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.364787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 15 09:00:45.365135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 15 09:00:45.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.367577 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 15 09:00:45.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.369717 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 15 09:00:45.369989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 15 09:00:45.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.371970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 15 09:00:45.372181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 15 09:00:45.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.374430 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 15 09:00:45.374639 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 15 09:00:45.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.376649 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 15 09:00:45.377009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 15 09:00:45.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.379224 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 15 09:00:45.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.381415 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 15 09:00:45.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.384442 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 15 09:00:45.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.386943 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 15 09:00:45.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.402412 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 15 09:00:45.404799 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Dec 15 09:00:45.408218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 15 09:00:45.411162 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 15 09:00:45.412994 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 15 09:00:45.413087 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 15 09:00:45.415719 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 15 09:00:45.418224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 15 09:00:45.418360 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 15 09:00:45.422580 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 15 09:00:45.425584 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 15 09:00:45.427485 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 15 09:00:45.428572 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 15 09:00:45.430734 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 15 09:00:45.434285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 15 09:00:45.444956 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 15 09:00:45.449976 systemd-journald[1215]: Time spent on flushing to /var/log/journal/09cfc80ec1c14eeca9f5d3fd77cb1371 is 27.833ms for 1095 entries.
Dec 15 09:00:45.449976 systemd-journald[1215]: System Journal (/var/log/journal/09cfc80ec1c14eeca9f5d3fd77cb1371) is 8M, max 163.5M, 155.5M free.
Dec 15 09:00:45.497452 systemd-journald[1215]: Received client request to flush runtime journal.
Dec 15 09:00:45.497514 kernel: loop1: detected capacity change from 0 to 375256
Dec 15 09:00:45.497542 kernel: loop1: p1 p2 p3
Dec 15 09:00:45.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.449102 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 15 09:00:45.453508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 15 09:00:45.456665 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 15 09:00:45.459639 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 15 09:00:45.464430 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 15 09:00:45.467049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 15 09:00:45.471738 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 15 09:00:45.477938 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 15 09:00:45.494647 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 15 09:00:45.499000 audit: BPF prog-id=18 op=LOAD
Dec 15 09:00:45.500000 audit: BPF prog-id=19 op=LOAD
Dec 15 09:00:45.500000 audit: BPF prog-id=20 op=LOAD
Dec 15 09:00:45.501964 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 15 09:00:45.504000 audit: BPF prog-id=21 op=LOAD
Dec 15 09:00:45.506141 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 15 09:00:45.512046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 15 09:00:45.514739 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 15 09:00:45.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.517985 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 15 09:00:45.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.523000 audit: BPF prog-id=22 op=LOAD
Dec 15 09:00:45.523000 audit: BPF prog-id=23 op=LOAD
Dec 15 09:00:45.523000 audit: BPF prog-id=24 op=LOAD
Dec 15 09:00:45.527214 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 15 09:00:45.530841 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39.
Dec 15 09:00:45.532000 audit: BPF prog-id=25 op=LOAD
Dec 15 09:00:45.532000 audit: BPF prog-id=26 op=LOAD
Dec 15 09:00:45.533000 audit: BPF prog-id=27 op=LOAD
Dec 15 09:00:45.535067 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 15 09:00:45.550912 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Dec 15 09:00:45.551214 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Dec 15 09:00:45.555925 kernel: loop2: detected capacity change from 0 to 171112
Dec 15 09:00:45.557886 kernel: loop2: p1 p2 p3
Dec 15 09:00:45.564432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 15 09:00:45.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.573080 systemd-nsresourced[1281]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 15 09:00:45.575303 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 15 09:00:45.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.583863 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39.
Dec 15 09:00:45.587976 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 15 09:00:45.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.603844 kernel: loop3: detected capacity change from 0 to 229808
Dec 15 09:00:45.629343 kernel: loop4: detected capacity change from 0 to 375256
Dec 15 09:00:45.629404 kernel: loop4: p1 p2 p3
Dec 15 09:00:45.646483 systemd-oomd[1272]: No swap; memory pressure usage will be degraded
Dec 15 09:00:45.647112 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 15 09:00:45.647148 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Dec 15 09:00:45.647168 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Dec 15 09:00:45.647641 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 15 09:00:45.648830 kernel: device-mapper: ioctl: error adding target to table
Dec 15 09:00:45.648440 (sd-merge)[1301]: device-mapper: reload ioctl on 0daeb3002d39c8ddc5a5178f945ad8012dd2881fb83aebbd8743bc867f89d9e6-verity (253:1) failed: Invalid argument
Dec 15 09:00:45.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.657859 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 15 09:00:45.664925 systemd-resolved[1273]: Positive Trust Anchors:
Dec 15 09:00:45.664940 systemd-resolved[1273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 15 09:00:45.664945 systemd-resolved[1273]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 15 09:00:45.664976 systemd-resolved[1273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 15 09:00:45.668490 systemd-resolved[1273]: Defaulting to hostname 'linux'.
Dec 15 09:00:45.669915 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 15 09:00:45.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.671869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 15 09:00:45.950726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 15 09:00:45.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:45.952000 audit: BPF prog-id=8 op=UNLOAD
Dec 15 09:00:45.952000 audit: BPF prog-id=7 op=UNLOAD
Dec 15 09:00:45.952000 audit: BPF prog-id=28 op=LOAD
Dec 15 09:00:45.952000 audit: BPF prog-id=29 op=LOAD
Dec 15 09:00:45.954390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 15 09:00:45.992604 systemd-udevd[1306]: Using default interface naming scheme 'v257'.
Dec 15 09:00:46.011160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 15 09:00:46.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:46.014000 audit: BPF prog-id=30 op=LOAD
Dec 15 09:00:46.017938 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 15 09:00:46.040561 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 15 09:00:46.055652 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 15 09:00:46.083629 systemd-networkd[1316]: lo: Link UP
Dec 15 09:00:46.084003 systemd-networkd[1316]: lo: Gained carrier
Dec 15 09:00:46.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 15 09:00:46.085185 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 15 09:00:46.089577 systemd-networkd[1316]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 15 09:00:46.089590 systemd-networkd[1316]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 15 09:00:46.090379 systemd-networkd[1316]: eth0: Link UP
Dec 15 09:00:46.090528 systemd-networkd[1316]: eth0: Gained carrier
Dec 15 09:00:46.090542 systemd-networkd[1316]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 15 09:00:46.092126 systemd[1]: Reached target network.target - Network.
Dec 15 09:00:46.096966 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 15 09:00:46.100356 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 15 09:00:46.106867 systemd-networkd[1316]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 15 09:00:46.108016 kernel: mousedev: PS/2 mouse device common for all mice Dec 15 09:00:46.113845 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 15 09:00:46.118892 kernel: ACPI: button: Power Button [PWRF] Dec 15 09:00:46.118775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 15 09:00:46.122428 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 15 09:00:46.125212 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 15 09:00:46.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.128365 kernel: kauditd_printk_skb: 116 callbacks suppressed Dec 15 09:00:46.128415 kernel: audit: type=1130 audit(1765789246.126:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.138850 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 15 09:00:46.140577 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 15 09:00:46.151353 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Dec 15 09:00:46.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.158829 kernel: audit: type=1130 audit(1765789246.152:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.229001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 15 09:00:46.306845 kernel: kvm_amd: TSC scaling supported Dec 15 09:00:46.306906 kernel: kvm_amd: Nested Virtualization enabled Dec 15 09:00:46.306921 kernel: kvm_amd: Nested Paging enabled Dec 15 09:00:46.306934 kernel: kvm_amd: LBR virtualization supported Dec 15 09:00:46.306970 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 15 09:00:46.306991 kernel: kvm_amd: Virtual GIF supported Dec 15 09:00:46.315843 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. 
Dec 15 09:00:46.319822 kernel: loop5: detected capacity change from 0 to 171112 Dec 15 09:00:46.320821 kernel: loop5: p1 p2 p3 Dec 15 09:00:46.330633 (sd-merge)[1301]: device-mapper: reload ioctl on b2d844f83f9497d231e7e3d8f1a9fb6abb8fdef0aead009b5c0ac3004592870d-verity (253:2) failed: Invalid argument Dec 15 09:00:46.330827 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 15 09:00:46.330881 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 15 09:00:46.330898 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 15 09:00:46.330912 kernel: device-mapper: ioctl: error adding target to table Dec 15 09:00:46.334839 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 15 09:00:46.344822 kernel: EDAC MC: Ver: 3.0.0 Dec 15 09:00:46.366838 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 15 09:00:46.368831 kernel: loop6: detected capacity change from 0 to 229808 Dec 15 09:00:46.373923 (sd-merge)[1301]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 15 09:00:46.377342 (sd-merge)[1301]: Merged extensions into '/usr'. Dec 15 09:00:46.381242 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Dec 15 09:00:46.381259 systemd[1]: Reloading... Dec 15 09:00:46.433882 zram_generator::config[1408]: No configuration found. Dec 15 09:00:46.666377 systemd[1]: Reloading finished in 284 ms. Dec 15 09:00:46.709593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 15 09:00:46.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.713118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 15 09:00:46.715831 kernel: audit: type=1130 audit(1765789246.711:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.721833 kernel: audit: type=1130 audit(1765789246.717:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:46.729174 systemd[1]: Starting ensure-sysext.service... Dec 15 09:00:46.731645 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 15 09:00:46.733000 audit: BPF prog-id=31 op=LOAD Dec 15 09:00:46.733000 audit: BPF prog-id=22 op=UNLOAD Dec 15 09:00:46.736839 kernel: audit: type=1334 audit(1765789246.733:157): prog-id=31 op=LOAD Dec 15 09:00:46.736872 kernel: audit: type=1334 audit(1765789246.733:158): prog-id=22 op=UNLOAD Dec 15 09:00:46.736887 kernel: audit: type=1334 audit(1765789246.733:159): prog-id=32 op=LOAD Dec 15 09:00:46.736900 kernel: audit: type=1334 audit(1765789246.733:160): prog-id=33 op=LOAD Dec 15 09:00:46.736914 kernel: audit: type=1334 audit(1765789246.733:161): prog-id=23 op=UNLOAD Dec 15 09:00:46.736939 kernel: audit: type=1334 audit(1765789246.733:162): prog-id=24 op=UNLOAD Dec 15 09:00:46.733000 audit: BPF prog-id=32 op=LOAD Dec 15 09:00:46.733000 audit: BPF prog-id=33 op=LOAD Dec 15 09:00:46.733000 audit: BPF prog-id=23 op=UNLOAD Dec 15 09:00:46.733000 audit: BPF prog-id=24 op=UNLOAD Dec 15 09:00:46.734000 audit: BPF prog-id=34 op=LOAD Dec 15 09:00:46.734000 audit: BPF prog-id=15 op=UNLOAD Dec 15 09:00:46.734000 audit: BPF 
prog-id=35 op=LOAD Dec 15 09:00:46.734000 audit: BPF prog-id=36 op=LOAD Dec 15 09:00:46.734000 audit: BPF prog-id=16 op=UNLOAD Dec 15 09:00:46.734000 audit: BPF prog-id=17 op=UNLOAD Dec 15 09:00:46.735000 audit: BPF prog-id=37 op=LOAD Dec 15 09:00:46.736000 audit: BPF prog-id=21 op=UNLOAD Dec 15 09:00:46.736000 audit: BPF prog-id=38 op=LOAD Dec 15 09:00:46.736000 audit: BPF prog-id=39 op=LOAD Dec 15 09:00:46.736000 audit: BPF prog-id=28 op=UNLOAD Dec 15 09:00:46.736000 audit: BPF prog-id=29 op=UNLOAD Dec 15 09:00:46.737000 audit: BPF prog-id=40 op=LOAD Dec 15 09:00:46.737000 audit: BPF prog-id=25 op=UNLOAD Dec 15 09:00:46.737000 audit: BPF prog-id=41 op=LOAD Dec 15 09:00:46.737000 audit: BPF prog-id=42 op=LOAD Dec 15 09:00:46.737000 audit: BPF prog-id=26 op=UNLOAD Dec 15 09:00:46.737000 audit: BPF prog-id=27 op=UNLOAD Dec 15 09:00:46.738000 audit: BPF prog-id=43 op=LOAD Dec 15 09:00:46.738000 audit: BPF prog-id=18 op=UNLOAD Dec 15 09:00:46.738000 audit: BPF prog-id=44 op=LOAD Dec 15 09:00:46.738000 audit: BPF prog-id=45 op=LOAD Dec 15 09:00:46.738000 audit: BPF prog-id=19 op=UNLOAD Dec 15 09:00:46.738000 audit: BPF prog-id=20 op=UNLOAD Dec 15 09:00:46.739000 audit: BPF prog-id=46 op=LOAD Dec 15 09:00:46.739000 audit: BPF prog-id=30 op=UNLOAD Dec 15 09:00:46.754880 systemd[1]: Reload requested from client PID 1442 ('systemctl') (unit ensure-sysext.service)... Dec 15 09:00:46.754894 systemd[1]: Reloading... Dec 15 09:00:46.761444 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 15 09:00:46.761483 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 15 09:00:46.761778 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 15 09:00:46.763134 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Dec 15 09:00:46.763210 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. 
Dec 15 09:00:46.768893 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Dec 15 09:00:46.768906 systemd-tmpfiles[1443]: Skipping /boot Dec 15 09:00:46.780399 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Dec 15 09:00:46.780414 systemd-tmpfiles[1443]: Skipping /boot Dec 15 09:00:46.814837 zram_generator::config[1480]: No configuration found. Dec 15 09:00:47.037688 systemd[1]: Reloading finished in 282 ms. Dec 15 09:00:47.063287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 15 09:00:47.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.067000 audit: BPF prog-id=47 op=LOAD Dec 15 09:00:47.068000 audit: BPF prog-id=34 op=UNLOAD Dec 15 09:00:47.068000 audit: BPF prog-id=48 op=LOAD Dec 15 09:00:47.068000 audit: BPF prog-id=49 op=LOAD Dec 15 09:00:47.068000 audit: BPF prog-id=35 op=UNLOAD Dec 15 09:00:47.068000 audit: BPF prog-id=36 op=UNLOAD Dec 15 09:00:47.068000 audit: BPF prog-id=50 op=LOAD Dec 15 09:00:47.068000 audit: BPF prog-id=37 op=UNLOAD Dec 15 09:00:47.069000 audit: BPF prog-id=51 op=LOAD Dec 15 09:00:47.069000 audit: BPF prog-id=31 op=UNLOAD Dec 15 09:00:47.069000 audit: BPF prog-id=52 op=LOAD Dec 15 09:00:47.069000 audit: BPF prog-id=53 op=LOAD Dec 15 09:00:47.069000 audit: BPF prog-id=32 op=UNLOAD Dec 15 09:00:47.069000 audit: BPF prog-id=33 op=UNLOAD Dec 15 09:00:47.070000 audit: BPF prog-id=54 op=LOAD Dec 15 09:00:47.070000 audit: BPF prog-id=40 op=UNLOAD Dec 15 09:00:47.070000 audit: BPF prog-id=55 op=LOAD Dec 15 09:00:47.070000 audit: BPF prog-id=56 op=LOAD Dec 15 09:00:47.070000 audit: BPF prog-id=41 op=UNLOAD Dec 15 09:00:47.070000 audit: BPF prog-id=42 op=UNLOAD Dec 15 09:00:47.072000 audit: BPF prog-id=57 op=LOAD Dec 15 
09:00:47.087000 audit: BPF prog-id=46 op=UNLOAD Dec 15 09:00:47.087000 audit: BPF prog-id=58 op=LOAD Dec 15 09:00:47.087000 audit: BPF prog-id=59 op=LOAD Dec 15 09:00:47.087000 audit: BPF prog-id=38 op=UNLOAD Dec 15 09:00:47.087000 audit: BPF prog-id=39 op=UNLOAD Dec 15 09:00:47.089000 audit: BPF prog-id=60 op=LOAD Dec 15 09:00:47.089000 audit: BPF prog-id=43 op=UNLOAD Dec 15 09:00:47.089000 audit: BPF prog-id=61 op=LOAD Dec 15 09:00:47.089000 audit: BPF prog-id=62 op=LOAD Dec 15 09:00:47.089000 audit: BPF prog-id=44 op=UNLOAD Dec 15 09:00:47.089000 audit: BPF prog-id=45 op=UNLOAD Dec 15 09:00:47.101104 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 15 09:00:47.103748 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 15 09:00:47.116249 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 15 09:00:47.119903 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 15 09:00:47.123200 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 15 09:00:47.129594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 15 09:00:47.129761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 15 09:00:47.132991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 15 09:00:47.136081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 15 09:00:47.136000 audit[1520]: SYSTEM_BOOT pid=1520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.139319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 15 09:00:47.141248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 15 09:00:47.141434 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 15 09:00:47.141523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 15 09:00:47.141613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 15 09:00:47.149547 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 15 09:00:47.150280 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 15 09:00:47.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.156240 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 15 09:00:47.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.160097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 15 09:00:47.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.174773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 15 09:00:47.175315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 15 09:00:47.176784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 15 09:00:47.182014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 15 09:00:47.183881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 15 09:00:47.184058 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 15 09:00:47.184154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 15 09:00:47.184280 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 15 09:00:47.185320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 15 09:00:47.185571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 15 09:00:47.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:00:47.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.188244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 15 09:00:47.188480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 15 09:00:47.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:47.190000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 15 09:00:47.190000 audit[1548]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfbef6360 a2=420 a3=0 items=0 ppid=1515 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:47.190000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 15 09:00:47.192297 augenrules[1548]: No rules Dec 15 09:00:47.191348 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 15 09:00:47.192691 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 15 09:00:47.195292 systemd[1]: audit-rules.service: Deactivated successfully. Dec 15 09:00:47.195562 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 15 09:00:47.197976 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 15 09:00:47.198203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 15 09:00:47.204542 systemd[1]: Finished ensure-sysext.service. Dec 15 09:00:47.210494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 15 09:00:47.210565 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 15 09:00:47.212519 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 15 09:00:47.215377 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 15 09:00:47.218467 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 15 09:00:47.280024 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 15 09:00:47.282380 systemd-timesyncd[1559]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 15 09:00:47.282426 systemd-timesyncd[1559]: Initial clock synchronization to Mon 2025-12-15 09:00:47.051773 UTC. Dec 15 09:00:47.282510 systemd[1]: Reached target time-set.target - System Time Set. Dec 15 09:00:47.502017 ldconfig[1517]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 15 09:00:47.507721 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 15 09:00:47.511162 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 15 09:00:47.533368 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 15 09:00:47.535558 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 15 09:00:47.537386 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 15 09:00:47.539361 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 15 09:00:47.541350 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 15 09:00:47.543306 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 15 09:00:47.545109 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 15 09:00:47.547119 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 15 09:00:47.549222 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 15 09:00:47.550952 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 15 09:00:47.552950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 15 09:00:47.552986 systemd[1]: Reached target paths.target - Path Units. Dec 15 09:00:47.554426 systemd[1]: Reached target timers.target - Timer Units. Dec 15 09:00:47.556546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 15 09:00:47.559952 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 15 09:00:47.563552 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 15 09:00:47.565677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 15 09:00:47.567672 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 15 09:00:47.572261 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 15 09:00:47.574154 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Dec 15 09:00:47.576541 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 15 09:00:47.578985 systemd[1]: Reached target sockets.target - Socket Units. Dec 15 09:00:47.580519 systemd[1]: Reached target basic.target - Basic System. Dec 15 09:00:47.582022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 15 09:00:47.582050 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 15 09:00:47.583084 systemd[1]: Starting containerd.service - containerd container runtime... Dec 15 09:00:47.585678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 15 09:00:47.588066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 15 09:00:47.599043 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 15 09:00:47.601999 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 15 09:00:47.603753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 15 09:00:47.604847 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 15 09:00:47.607494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 15 09:00:47.608356 jq[1573]: false Dec 15 09:00:47.611883 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 15 09:00:47.614511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 15 09:00:47.619409 extend-filesystems[1574]: Found /dev/vda6 Dec 15 09:00:47.617822 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 15 09:00:47.623701 extend-filesystems[1574]: Found /dev/vda9 Dec 15 09:00:47.625965 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 15 09:00:47.626286 extend-filesystems[1574]: Checking size of /dev/vda9 Dec 15 09:00:47.626607 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 15 09:00:47.629065 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing passwd entry cache Dec 15 09:00:47.627671 oslogin_cache_refresh[1575]: Refreshing passwd entry cache Dec 15 09:00:47.627084 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 15 09:00:47.630995 systemd[1]: Starting update-engine.service - Update Engine... Dec 15 09:00:47.634689 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 15 09:00:47.636838 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting users, quitting Dec 15 09:00:47.636838 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 15 09:00:47.636823 oslogin_cache_refresh[1575]: Failure getting users, quitting Dec 15 09:00:47.636847 oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 15 09:00:47.636951 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing group entry cache Dec 15 09:00:47.636915 oslogin_cache_refresh[1575]: Refreshing group entry cache Dec 15 09:00:47.641395 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting groups, quitting Dec 15 09:00:47.641395 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 15 09:00:47.641387 oslogin_cache_refresh[1575]: Failure getting groups, quitting Dec 15 09:00:47.641417 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Dec 15 09:00:47.641400 oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 15 09:00:47.643846 extend-filesystems[1574]: Resized partition /dev/vda9 Dec 15 09:00:47.644034 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 15 09:00:47.644310 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 15 09:00:47.644627 systemd[1]: motdgen.service: Deactivated successfully. Dec 15 09:00:47.644888 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 15 09:00:47.646008 jq[1594]: true Dec 15 09:00:47.649902 extend-filesystems[1600]: resize2fs 1.47.3 (8-Jul-2025) Dec 15 09:00:47.650276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 15 09:00:47.650557 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 15 09:00:47.657174 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 15 09:00:47.657688 update_engine[1589]: I20251215 09:00:47.657608 1589 main.cc:92] Flatcar Update Engine starting Dec 15 09:00:47.662266 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 15 09:00:47.662657 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 15 09:00:47.668487 jq[1604]: true Dec 15 09:00:47.682839 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 15 09:00:47.718720 extend-filesystems[1600]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 15 09:00:47.718720 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 15 09:00:47.718720 extend-filesystems[1600]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 15 09:00:47.729089 extend-filesystems[1574]: Resized filesystem in /dev/vda9 Dec 15 09:00:47.723078 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 15 09:00:47.723393 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 15 09:00:47.738036 tar[1602]: linux-amd64/LICENSE Dec 15 09:00:47.738289 tar[1602]: linux-amd64/helm Dec 15 09:00:47.757141 dbus-daemon[1571]: [system] SELinux support is enabled Dec 15 09:00:47.757398 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 15 09:00:47.763697 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 15 09:00:47.764697 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 15 09:00:47.766851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 15 09:00:47.766925 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 15 09:00:47.770374 update_engine[1589]: I20251215 09:00:47.770229 1589 update_check_scheduler.cc:74] Next update check in 10m16s Dec 15 09:00:47.770626 systemd[1]: Started update-engine.service - Update Engine. Dec 15 09:00:47.776075 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 15 09:00:47.777289 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Dec 15 09:00:47.780682 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Dec 15 09:00:47.780714 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 15 09:00:47.780760 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 15 09:00:47.781213 systemd-logind[1586]: New seat seat0. Dec 15 09:00:47.785727 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 15 09:00:47.791933 systemd[1]: Started systemd-logind.service - User Login Management. Dec 15 09:00:47.821630 sshd_keygen[1601]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 15 09:00:47.838684 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 15 09:00:47.852483 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 15 09:00:47.856883 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 15 09:00:47.881935 systemd-networkd[1316]: eth0: Gained IPv6LL Dec 15 09:00:47.882710 systemd[1]: issuegen.service: Deactivated successfully. Dec 15 09:00:47.883038 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 15 09:00:47.888094 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 15 09:00:47.890384 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 15 09:00:47.892683 systemd[1]: Reached target network-online.target - Network is Online. Dec 15 09:00:47.896991 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 15 09:00:47.905005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 15 09:00:47.909100 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 15 09:00:47.911870 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 15 09:00:47.931209 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 15 09:00:47.936111 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 15 09:00:47.938154 systemd[1]: Reached target getty.target - Login Prompts. Dec 15 09:00:47.951743 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 15 09:00:47.959119 containerd[1611]: time="2025-12-15T09:00:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 15 09:00:47.962444 containerd[1611]: time="2025-12-15T09:00:47.962048906Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 15 09:00:47.963864 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 15 09:00:47.964182 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 15 09:00:47.966900 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 15 09:00:47.977690 containerd[1611]: time="2025-12-15T09:00:47.977643692Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.66µs" Dec 15 09:00:47.977690 containerd[1611]: time="2025-12-15T09:00:47.977685270Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 15 09:00:47.977743 containerd[1611]: time="2025-12-15T09:00:47.977729984Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 15 09:00:47.977762 containerd[1611]: time="2025-12-15T09:00:47.977744852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 15 09:00:47.977988 containerd[1611]: time="2025-12-15T09:00:47.977963252Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 15 09:00:47.978016 containerd[1611]: time="2025-12-15T09:00:47.977988429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978077 containerd[1611]: time="2025-12-15T09:00:47.978053381Z" level=info msg="skip loading plugin" 
error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978077 containerd[1611]: time="2025-12-15T09:00:47.978071595Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978333 containerd[1611]: time="2025-12-15T09:00:47.978308519Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978333 containerd[1611]: time="2025-12-15T09:00:47.978328326Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978383 containerd[1611]: time="2025-12-15T09:00:47.978340148Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978383 containerd[1611]: time="2025-12-15T09:00:47.978349416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978736 containerd[1611]: time="2025-12-15T09:00:47.978707768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 15 09:00:47.978839 containerd[1611]: time="2025-12-15T09:00:47.978817093Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 15 09:00:47.979063 containerd[1611]: time="2025-12-15T09:00:47.979040392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 15 09:00:47.979089 containerd[1611]: time="2025-12-15T09:00:47.979076018Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such 
file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 15 09:00:47.979110 containerd[1611]: time="2025-12-15T09:00:47.979087039Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 15 09:00:47.979139 containerd[1611]: time="2025-12-15T09:00:47.979109251Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 15 09:00:47.981485 containerd[1611]: time="2025-12-15T09:00:47.981433229Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 15 09:00:47.981612 containerd[1611]: time="2025-12-15T09:00:47.981588510Z" level=info msg="metadata content store policy set" policy=shared Dec 15 09:00:47.988539 containerd[1611]: time="2025-12-15T09:00:47.988509770Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 15 09:00:47.988579 containerd[1611]: time="2025-12-15T09:00:47.988564012Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 15 09:00:47.988676 containerd[1611]: time="2025-12-15T09:00:47.988645885Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 15 09:00:47.988676 containerd[1611]: time="2025-12-15T09:00:47.988670672Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 15 09:00:47.988721 containerd[1611]: time="2025-12-15T09:00:47.988685530Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 15 09:00:47.988721 containerd[1611]: time="2025-12-15T09:00:47.988697873Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 15 09:00:47.988721 containerd[1611]: 
time="2025-12-15T09:00:47.988709444Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 15 09:00:47.988721 containerd[1611]: time="2025-12-15T09:00:47.988718882Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 15 09:00:47.988815 containerd[1611]: time="2025-12-15T09:00:47.988734521Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 15 09:00:47.988815 containerd[1611]: time="2025-12-15T09:00:47.988767263Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 15 09:00:47.988815 containerd[1611]: time="2025-12-15T09:00:47.988780077Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 15 09:00:47.988815 containerd[1611]: time="2025-12-15T09:00:47.988791037Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 15 09:00:47.988815 containerd[1611]: time="2025-12-15T09:00:47.988812438Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 15 09:00:47.988915 containerd[1611]: time="2025-12-15T09:00:47.988825913Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.988946980Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.988970955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.988984741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: 
time="2025-12-15T09:00:47.988994629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989006502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989018204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989038602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989055894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989066644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989077485Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989087203Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989243937Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989355957Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 15 09:00:47.989372 containerd[1611]: time="2025-12-15T09:00:47.989369723Z" level=info msg="Start snapshots syncer" Dec 15 09:00:47.990767 containerd[1611]: time="2025-12-15T09:00:47.990675792Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime 
type=io.containerd.cri.v1 Dec 15 09:00:47.991038 containerd[1611]: time="2025-12-15T09:00:47.990992366Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 15 09:00:47.991149 containerd[1611]: time="2025-12-15T09:00:47.991050274Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 15 09:00:47.992417 containerd[1611]: time="2025-12-15T09:00:47.992382583Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 15 09:00:47.992518 containerd[1611]: time="2025-12-15T09:00:47.992495665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 15 09:00:47.992544 containerd[1611]: time="2025-12-15T09:00:47.992524088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 15 09:00:47.992544 containerd[1611]: time="2025-12-15T09:00:47.992535860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 15 09:00:47.992590 containerd[1611]: time="2025-12-15T09:00:47.992545238Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 15 09:00:47.992590 containerd[1611]: time="2025-12-15T09:00:47.992565145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 15 09:00:47.992590 containerd[1611]: time="2025-12-15T09:00:47.992576667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 15 09:00:47.992590 containerd[1611]: time="2025-12-15T09:00:47.992588028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 15 09:00:47.992660 containerd[1611]: time="2025-12-15T09:00:47.992599099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 15 09:00:47.992660 containerd[1611]: time="2025-12-15T09:00:47.992615560Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 15 09:00:47.992660 containerd[1611]: time="2025-12-15T09:00:47.992636359Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 15 09:00:47.992660 containerd[1611]: time="2025-12-15T09:00:47.992647840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 15 09:00:47.992660 containerd[1611]: time="2025-12-15T09:00:47.992659883Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992676664Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992685681Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992695309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992704847Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992724805Z" level=info msg="runtime interface created" Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992730986Z" level=info msg="created NRI interface" Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992739091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 15 09:00:47.992752 containerd[1611]: time="2025-12-15T09:00:47.992749000Z" level=info msg="Connect containerd service" Dec 15 09:00:47.992909 containerd[1611]: time="2025-12-15T09:00:47.992770550Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 15 09:00:47.994170 
containerd[1611]: time="2025-12-15T09:00:47.994139818Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 15 09:00:48.089774 tar[1602]: linux-amd64/README.md Dec 15 09:00:48.112401 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 15 09:00:48.131898 containerd[1611]: time="2025-12-15T09:00:48.131773442Z" level=info msg="Start subscribing containerd event" Dec 15 09:00:48.132030 containerd[1611]: time="2025-12-15T09:00:48.131998061Z" level=info msg="Start recovering state" Dec 15 09:00:48.132244 containerd[1611]: time="2025-12-15T09:00:48.132195290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 15 09:00:48.132308 containerd[1611]: time="2025-12-15T09:00:48.132278949Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 15 09:00:48.133630 containerd[1611]: time="2025-12-15T09:00:48.133306435Z" level=info msg="Start event monitor" Dec 15 09:00:48.133630 containerd[1611]: time="2025-12-15T09:00:48.133338223Z" level=info msg="Start cni network conf syncer for default" Dec 15 09:00:48.133630 containerd[1611]: time="2025-12-15T09:00:48.133381775Z" level=info msg="Start streaming server" Dec 15 09:00:48.133862 containerd[1611]: time="2025-12-15T09:00:48.133749212Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 15 09:00:48.133924 containerd[1611]: time="2025-12-15T09:00:48.133911247Z" level=info msg="runtime interface starting up..." Dec 15 09:00:48.133969 containerd[1611]: time="2025-12-15T09:00:48.133958963Z" level=info msg="starting plugins..." 
Dec 15 09:00:48.134028 containerd[1611]: time="2025-12-15T09:00:48.134012391Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 15 09:00:48.134723 containerd[1611]: time="2025-12-15T09:00:48.134194440Z" level=info msg="containerd successfully booted in 0.175345s" Dec 15 09:00:48.134359 systemd[1]: Started containerd.service - containerd container runtime. Dec 15 09:00:48.671496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:00:48.673761 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 15 09:00:48.675741 systemd[1]: Startup finished in 2.685s (kernel) + 5.812s (initrd) + 4.260s (userspace) = 12.757s. Dec 15 09:00:48.682102 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 15 09:00:49.077693 kubelet[1711]: E1215 09:00:49.077658 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 15 09:00:49.081663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 15 09:00:49.081878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 15 09:00:49.082260 systemd[1]: kubelet.service: Consumed 959ms CPU time, 264.8M memory peak. Dec 15 09:00:50.370983 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 15 09:00:50.372149 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:37378.service - OpenSSH per-connection server daemon (10.0.0.1:37378). 
Dec 15 09:00:50.451373 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 37378 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:50.453610 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:50.459904 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 15 09:00:50.460976 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 15 09:00:50.465047 systemd-logind[1586]: New session 1 of user core. Dec 15 09:00:50.480235 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 15 09:00:50.483055 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 15 09:00:50.502569 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:50.505074 systemd-logind[1586]: New session 2 of user core. Dec 15 09:00:50.658590 systemd[1730]: Queued start job for default target default.target. Dec 15 09:00:50.677984 systemd[1730]: Created slice app.slice - User Application Slice. Dec 15 09:00:50.678011 systemd[1730]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 15 09:00:50.678025 systemd[1730]: Reached target paths.target - Paths. Dec 15 09:00:50.678065 systemd[1730]: Reached target timers.target - Timers. Dec 15 09:00:50.679434 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 15 09:00:50.680309 systemd[1730]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 15 09:00:50.691108 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 15 09:00:50.691508 systemd[1730]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 15 09:00:50.691659 systemd[1730]: Reached target sockets.target - Sockets. Dec 15 09:00:50.691701 systemd[1730]: Reached target basic.target - Basic System. 
Dec 15 09:00:50.691747 systemd[1730]: Reached target default.target - Main User Target. Dec 15 09:00:50.691790 systemd[1730]: Startup finished in 180ms. Dec 15 09:00:50.692318 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 15 09:00:50.694076 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 15 09:00:50.705112 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:37394.service - OpenSSH per-connection server daemon (10.0.0.1:37394). Dec 15 09:00:50.762375 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 37394 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:50.763857 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:50.768082 systemd-logind[1586]: New session 3 of user core. Dec 15 09:00:50.777921 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 15 09:00:50.789299 sshd[1748]: Connection closed by 10.0.0.1 port 37394 Dec 15 09:00:50.789760 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 15 09:00:50.799305 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:37394.service: Deactivated successfully. Dec 15 09:00:50.801177 systemd[1]: session-3.scope: Deactivated successfully. Dec 15 09:00:50.801902 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Dec 15 09:00:50.804497 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:37410.service - OpenSSH per-connection server daemon (10.0.0.1:37410). Dec 15 09:00:50.804987 systemd-logind[1586]: Removed session 3. Dec 15 09:00:50.859405 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 37410 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:50.861030 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:50.865291 systemd-logind[1586]: New session 4 of user core. Dec 15 09:00:50.874941 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 15 09:00:50.882851 sshd[1758]: Connection closed by 10.0.0.1 port 37410 Dec 15 09:00:50.883091 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Dec 15 09:00:50.893184 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:37410.service: Deactivated successfully. Dec 15 09:00:50.895013 systemd[1]: session-4.scope: Deactivated successfully. Dec 15 09:00:50.895729 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Dec 15 09:00:50.898339 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:37416.service - OpenSSH per-connection server daemon (10.0.0.1:37416). Dec 15 09:00:50.898895 systemd-logind[1586]: Removed session 4. Dec 15 09:00:50.957921 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 37416 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:50.959332 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:50.963447 systemd-logind[1586]: New session 5 of user core. Dec 15 09:00:50.973936 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 15 09:00:50.985870 sshd[1769]: Connection closed by 10.0.0.1 port 37416 Dec 15 09:00:50.986155 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Dec 15 09:00:50.999291 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:37416.service: Deactivated successfully. Dec 15 09:00:51.001041 systemd[1]: session-5.scope: Deactivated successfully. Dec 15 09:00:51.001755 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Dec 15 09:00:51.004341 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:37426.service - OpenSSH per-connection server daemon (10.0.0.1:37426). Dec 15 09:00:51.004922 systemd-logind[1586]: Removed session 5. 
Dec 15 09:00:51.061157 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 37426 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:51.062578 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:51.066556 systemd-logind[1586]: New session 6 of user core. Dec 15 09:00:51.075949 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 15 09:00:51.095122 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 15 09:00:51.095461 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 15 09:00:51.110390 sudo[1780]: pam_unix(sudo:session): session closed for user root Dec 15 09:00:51.111700 sshd[1779]: Connection closed by 10.0.0.1 port 37426 Dec 15 09:00:51.112050 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Dec 15 09:00:51.131254 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:37426.service: Deactivated successfully. Dec 15 09:00:51.133064 systemd[1]: session-6.scope: Deactivated successfully. Dec 15 09:00:51.133819 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Dec 15 09:00:51.136421 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:37436.service - OpenSSH per-connection server daemon (10.0.0.1:37436). Dec 15 09:00:51.137029 systemd-logind[1586]: Removed session 6. Dec 15 09:00:51.196270 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 37436 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:51.197882 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:00:51.202174 systemd-logind[1586]: New session 7 of user core. Dec 15 09:00:51.212924 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 15 09:00:51.227705 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 15 09:00:51.228150 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 15 09:00:51.233420 sudo[1793]: pam_unix(sudo:session): session closed for user root Dec 15 09:00:51.240884 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 15 09:00:51.241208 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 15 09:00:51.250188 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 15 09:00:51.299000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 15 09:00:51.301337 augenrules[1817]: No rules Dec 15 09:00:51.302983 kernel: kauditd_printk_skb: 71 callbacks suppressed Dec 15 09:00:51.303075 kernel: audit: type=1305 audit(1765789251.299:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 15 09:00:51.302915 systemd[1]: audit-rules.service: Deactivated successfully. Dec 15 09:00:51.303221 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 15 09:00:51.304293 sudo[1792]: pam_unix(sudo:session): session closed for user root Dec 15 09:00:51.299000 audit[1817]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7ffb52b0 a2=420 a3=0 items=0 ppid=1798 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:51.305578 sshd[1791]: Connection closed by 10.0.0.1 port 37436 Dec 15 09:00:51.305897 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Dec 15 09:00:51.309936 kernel: audit: type=1300 audit(1765789251.299:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7ffb52b0 a2=420 a3=0 items=0 ppid=1798 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:51.309989 kernel: audit: type=1327 audit(1765789251.299:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 15 09:00:51.299000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 15 09:00:51.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.315753 kernel: audit: type=1130 audit(1765789251.301:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.315773 kernel: audit: type=1131 audit(1765789251.301:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 15 09:00:51.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.319354 kernel: audit: type=1106 audit(1765789251.301:235): pid=1792 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.301000 audit[1792]: USER_END pid=1792 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.323315 kernel: audit: type=1104 audit(1765789251.301:236): pid=1792 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.301000 audit[1792]: CRED_DISP pid=1792 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 15 09:00:51.303000 audit[1787]: USER_END pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.332501 kernel: audit: type=1106 audit(1765789251.303:237): pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.332532 kernel: audit: type=1104 audit(1765789251.303:238): pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.303000 audit[1787]: CRED_DISP pid=1787 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.345331 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:37436.service: Deactivated successfully. Dec 15 09:00:51.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.128:22-10.0.0.1:37436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.347102 systemd[1]: session-7.scope: Deactivated successfully. Dec 15 09:00:51.347852 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. 
Dec 15 09:00:51.349833 kernel: audit: type=1131 audit(1765789251.344:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.128:22-10.0.0.1:37436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.350576 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Dec 15 09:00:51.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.128:22-10.0.0.1:37452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.351188 systemd-logind[1586]: Removed session 7. Dec 15 09:00:51.395000 audit[1826]: USER_ACCT pid=1826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.396645 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:00:51.396000 audit[1826]: CRED_ACQ pid=1826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.396000 audit[1826]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c9b03a0 a2=3 a3=0 items=0 ppid=1 pid=1826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:51.396000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:00:51.398001 sshd-session[1826]: pam_unix(sshd:session): session opened 
for user core(uid=500) by core(uid=0) Dec 15 09:00:51.402453 systemd-logind[1586]: New session 8 of user core. Dec 15 09:00:51.415962 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 15 09:00:51.416000 audit[1826]: USER_START pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.418000 audit[1830]: CRED_ACQ pid=1830 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:00:51.428000 audit[1831]: USER_ACCT pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.429040 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 15 09:00:51.428000 audit[1831]: CRED_REFR pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.429380 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 15 09:00:51.428000 audit[1831]: USER_START pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:00:51.869359 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 15 09:00:51.893089 (dockerd)[1852]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 15 09:00:52.516653 dockerd[1852]: time="2025-12-15T09:00:52.516584741Z" level=info msg="Starting up" Dec 15 09:00:52.621770 dockerd[1852]: time="2025-12-15T09:00:52.621716968Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 15 09:00:52.648987 dockerd[1852]: time="2025-12-15T09:00:52.648935259Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 15 09:00:53.649536 dockerd[1852]: time="2025-12-15T09:00:53.649474408Z" level=info msg="Loading containers: start." Dec 15 09:00:53.661839 kernel: Initializing XFRM netlink socket Dec 15 09:00:53.724000 audit[1906]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.724000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffce8721bc0 a2=0 a3=0 items=0 ppid=1852 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 15 09:00:53.727000 audit[1908]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.727000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffda506c8e0 a2=0 a3=0 items=0 ppid=1852 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:00:53.727000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 15 09:00:53.729000 audit[1910]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.729000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc56a6c7d0 a2=0 a3=0 items=0 ppid=1852 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 15 09:00:53.731000 audit[1912]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.731000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2dd42610 a2=0 a3=0 items=0 ppid=1852 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 15 09:00:53.734000 audit[1914]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.734000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe64a8ea40 a2=0 a3=0 items=0 ppid=1852 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.734000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 15 09:00:53.735000 audit[1916]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.735000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffee6a4b6d0 a2=0 a3=0 items=0 ppid=1852 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.735000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 15 09:00:53.737000 audit[1918]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.737000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcc5b809a0 a2=0 a3=0 items=0 ppid=1852 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 15 09:00:53.740000 audit[1920]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.740000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd482abd10 a2=0 a3=0 items=0 ppid=1852 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.740000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 15 09:00:53.827000 audit[1923]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.827000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff0f9bb620 a2=0 a3=0 items=0 ppid=1852 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 15 09:00:53.830000 audit[1925]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.830000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff1b99b2e0 a2=0 a3=0 items=0 ppid=1852 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 15 09:00:53.833000 audit[1927]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.833000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff44576800 a2=0 a3=0 items=0 ppid=1852 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.833000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 15 09:00:53.835000 audit[1929]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.835000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff7c405340 a2=0 a3=0 items=0 ppid=1852 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 15 09:00:53.837000 audit[1931]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.837000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffe655ccc0 a2=0 a3=0 items=0 ppid=1852 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 15 09:00:53.884000 audit[1961]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.884000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff1b072a20 a2=0 a3=0 items=0 ppid=1852 pid=1961 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 15 09:00:53.886000 audit[1963]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.886000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff943979b0 a2=0 a3=0 items=0 ppid=1852 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 15 09:00:53.888000 audit[1965]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.888000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb55f2920 a2=0 a3=0 items=0 ppid=1852 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 15 09:00:53.891000 audit[1967]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.891000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc31e3c300 a2=0 a3=0 items=0 ppid=1852 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 15 09:00:53.894000 audit[1969]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.894000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdcf520b30 a2=0 a3=0 items=0 ppid=1852 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 15 09:00:53.897000 audit[1971]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.897000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffec8e9baa0 a2=0 a3=0 items=0 ppid=1852 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 15 09:00:53.899000 audit[1973]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.899000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc236f2d30 a2=0 a3=0 items=0 ppid=1852 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 15 09:00:53.901000 audit[1975]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.901000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff4e72ce40 a2=0 a3=0 items=0 ppid=1852 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 15 09:00:53.904000 audit[1977]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.904000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffec189f7b0 a2=0 a3=0 items=0 ppid=1852 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.904000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 15 09:00:53.907000 audit[1979]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 
09:00:53.907000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffb4de44b0 a2=0 a3=0 items=0 ppid=1852 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.907000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 15 09:00:53.909000 audit[1981]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.909000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffedabb9260 a2=0 a3=0 items=0 ppid=1852 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.909000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 15 09:00:53.911000 audit[1983]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.911000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffb7537b20 a2=0 a3=0 items=0 ppid=1852 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 15 09:00:53.913000 audit[1985]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1985 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.913000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe14a5e1a0 a2=0 a3=0 items=0 ppid=1852 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 15 09:00:53.920000 audit[1990]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.920000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcaa407cd0 a2=0 a3=0 items=0 ppid=1852 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 15 09:00:53.923000 audit[1992]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.923000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdacac9ec0 a2=0 a3=0 items=0 ppid=1852 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 15 09:00:53.925000 audit[1994]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1994 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.925000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8508cb60 a2=0 a3=0 items=0 ppid=1852 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 15 09:00:53.927000 audit[1996]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.927000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcbf0715c0 a2=0 a3=0 items=0 ppid=1852 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 15 09:00:53.930000 audit[1998]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.930000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe3678e0d0 a2=0 a3=0 items=0 ppid=1852 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 15 09:00:53.932000 audit[2000]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2000 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:00:53.932000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe970b5a50 a2=0 a3=0 items=0 ppid=1852 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 15 09:00:53.951000 audit[2005]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.951000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc21a9e200 a2=0 a3=0 items=0 ppid=1852 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 15 09:00:53.954000 audit[2007]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.954000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe195b8da0 a2=0 a3=0 items=0 ppid=1852 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 15 09:00:53.964000 audit[2015]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.964000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff2f457fd0 a2=0 a3=0 items=0 ppid=1852 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 15 09:00:53.973000 audit[2021]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.973000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff1927d6f0 a2=0 a3=0 items=0 ppid=1852 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 15 09:00:53.976000 audit[2023]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.976000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd22e03260 a2=0 a3=0 items=0 ppid=1852 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.976000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 15 09:00:53.978000 audit[2025]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.978000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcfb905110 a2=0 a3=0 items=0 ppid=1852 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 15 09:00:53.981000 audit[2027]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.981000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd01e51b40 a2=0 a3=0 items=0 ppid=1852 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 15 09:00:53.983000 audit[2029]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:00:53.983000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd32d97490 
a2=0 a3=0 items=0 ppid=1852 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:00:53.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 15 09:00:53.984696 systemd-networkd[1316]: docker0: Link UP Dec 15 09:00:53.990061 dockerd[1852]: time="2025-12-15T09:00:53.990005650Z" level=info msg="Loading containers: done." Dec 15 09:00:54.012381 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck312142803-merged.mount: Deactivated successfully. Dec 15 09:00:54.019263 dockerd[1852]: time="2025-12-15T09:00:54.019210547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 15 09:00:54.019355 dockerd[1852]: time="2025-12-15T09:00:54.019321396Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 15 09:00:54.019438 dockerd[1852]: time="2025-12-15T09:00:54.019413424Z" level=info msg="Initializing buildkit" Dec 15 09:00:54.055373 dockerd[1852]: time="2025-12-15T09:00:54.055333077Z" level=info msg="Completed buildkit initialization" Dec 15 09:00:54.063439 dockerd[1852]: time="2025-12-15T09:00:54.063382749Z" level=info msg="Daemon has completed initialization" Dec 15 09:00:54.063546 dockerd[1852]: time="2025-12-15T09:00:54.063482591Z" level=info msg="API listen on /run/docker.sock" Dec 15 09:00:54.063747 systemd[1]: Started docker.service - Docker Application Container Engine. 
Dec 15 09:00:54.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:54.923228 containerd[1611]: time="2025-12-15T09:00:54.923183010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 15 09:00:55.727976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624271628.mount: Deactivated successfully. Dec 15 09:00:56.583114 containerd[1611]: time="2025-12-15T09:00:56.583053774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:00:56.583759 containerd[1611]: time="2025-12-15T09:00:56.583712444Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 15 09:00:56.584861 containerd[1611]: time="2025-12-15T09:00:56.584826265Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:00:56.587174 containerd[1611]: time="2025-12-15T09:00:56.587124860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:00:56.588234 containerd[1611]: time="2025-12-15T09:00:56.588184142Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.664959809s" Dec 15 09:00:56.588280 containerd[1611]: time="2025-12-15T09:00:56.588231122Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 15 09:00:56.588818 containerd[1611]: time="2025-12-15T09:00:56.588782006Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 15 09:00:58.033072 containerd[1611]: time="2025-12-15T09:00:58.033013772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:00:58.033758 containerd[1611]: time="2025-12-15T09:00:58.033733653Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 15 09:00:58.034932 containerd[1611]: time="2025-12-15T09:00:58.034882031Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:00:58.037480 containerd[1611]: time="2025-12-15T09:00:58.037431999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:00:58.038276 containerd[1611]: time="2025-12-15T09:00:58.038247560Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.449413966s" Dec 15 09:00:58.038318 containerd[1611]: time="2025-12-15T09:00:58.038279666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference 
\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 15 09:00:58.038801 containerd[1611]: time="2025-12-15T09:00:58.038778229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 15 09:00:59.332570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 15 09:00:59.334552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 15 09:00:59.551345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:00:59.552626 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 15 09:00:59.552848 kernel: audit: type=1130 audit(1765789259.550:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:59.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:00:59.557027 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 15 09:00:59.825917 kubelet[2142]: E1215 09:00:59.825763 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 15 09:00:59.833057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 15 09:00:59.833250 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 15 09:00:59.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 15 09:00:59.834130 systemd[1]: kubelet.service: Consumed 458ms CPU time, 109M memory peak. Dec 15 09:00:59.837824 kernel: audit: type=1131 audit(1765789259.833:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 15 09:01:00.078416 containerd[1611]: time="2025-12-15T09:01:00.078296597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:00.079310 containerd[1611]: time="2025-12-15T09:01:00.079280747Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 15 09:01:00.080370 containerd[1611]: time="2025-12-15T09:01:00.080339294Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:00.082684 containerd[1611]: time="2025-12-15T09:01:00.082636824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:00.083537 containerd[1611]: time="2025-12-15T09:01:00.083506447Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.044700611s" Dec 15 
09:01:00.083537 containerd[1611]: time="2025-12-15T09:01:00.083533998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 15 09:01:00.084128 containerd[1611]: time="2025-12-15T09:01:00.083982516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 15 09:01:01.072334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351868557.mount: Deactivated successfully. Dec 15 09:01:01.729918 containerd[1611]: time="2025-12-15T09:01:01.729854231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:01.730687 containerd[1611]: time="2025-12-15T09:01:01.730652122Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Dec 15 09:01:01.731814 containerd[1611]: time="2025-12-15T09:01:01.731769224Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:01.733990 containerd[1611]: time="2025-12-15T09:01:01.733962323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:01.734400 containerd[1611]: time="2025-12-15T09:01:01.734371825Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.650364052s" Dec 15 09:01:01.734430 containerd[1611]: time="2025-12-15T09:01:01.734398588Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 15 09:01:01.735017 containerd[1611]: time="2025-12-15T09:01:01.734995875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 15 09:01:02.245129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498718042.mount: Deactivated successfully. Dec 15 09:01:02.899240 containerd[1611]: time="2025-12-15T09:01:02.899179398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:02.899939 containerd[1611]: time="2025-12-15T09:01:02.899882494Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 15 09:01:02.901011 containerd[1611]: time="2025-12-15T09:01:02.900967055Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:02.903613 containerd[1611]: time="2025-12-15T09:01:02.903578348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:02.904506 containerd[1611]: time="2025-12-15T09:01:02.904447476Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.169421131s" Dec 15 09:01:02.904506 containerd[1611]: time="2025-12-15T09:01:02.904496349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 15 09:01:02.905162 containerd[1611]: time="2025-12-15T09:01:02.905107921Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 15 09:01:03.443867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056521113.mount: Deactivated successfully. Dec 15 09:01:03.450160 containerd[1611]: time="2025-12-15T09:01:03.450091555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 15 09:01:03.450951 containerd[1611]: time="2025-12-15T09:01:03.450913863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 15 09:01:03.452136 containerd[1611]: time="2025-12-15T09:01:03.452104486Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 15 09:01:03.454837 containerd[1611]: time="2025-12-15T09:01:03.454775948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 15 09:01:03.455538 containerd[1611]: time="2025-12-15T09:01:03.455470763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 550.320635ms" Dec 15 09:01:03.455538 containerd[1611]: time="2025-12-15T09:01:03.455512459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image 
reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 15 09:01:03.456105 containerd[1611]: time="2025-12-15T09:01:03.456071669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 15 09:01:04.008146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507977226.mount: Deactivated successfully. Dec 15 09:01:06.413410 containerd[1611]: time="2025-12-15T09:01:06.413322652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:06.414406 containerd[1611]: time="2025-12-15T09:01:06.414340068Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Dec 15 09:01:06.415724 containerd[1611]: time="2025-12-15T09:01:06.415678089Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:06.418469 containerd[1611]: time="2025-12-15T09:01:06.418393043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:06.419364 containerd[1611]: time="2025-12-15T09:01:06.419336193Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.963238177s" Dec 15 09:01:06.419364 containerd[1611]: time="2025-12-15T09:01:06.419365561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 15 09:01:09.747380 systemd[1]: Stopped 
kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:01:09.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:09.747609 systemd[1]: kubelet.service: Consumed 458ms CPU time, 109M memory peak. Dec 15 09:01:09.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:09.751251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 15 09:01:09.755018 kernel: audit: type=1130 audit(1765789269.746:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:09.755091 kernel: audit: type=1131 audit(1765789269.746:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:09.778904 systemd[1]: Reload requested from client PID 2306 ('systemctl') (unit session-8.scope)... Dec 15 09:01:09.778925 systemd[1]: Reloading... Dec 15 09:01:09.871970 zram_generator::config[2354]: No configuration found. Dec 15 09:01:10.247314 systemd[1]: Reloading finished in 468 ms. 
Dec 15 09:01:10.278573 kernel: audit: type=1334 audit(1765789270.275:294): prog-id=67 op=LOAD Dec 15 09:01:10.278633 kernel: audit: type=1334 audit(1765789270.275:295): prog-id=57 op=UNLOAD Dec 15 09:01:10.275000 audit: BPF prog-id=67 op=LOAD Dec 15 09:01:10.275000 audit: BPF prog-id=57 op=UNLOAD Dec 15 09:01:10.279996 kernel: audit: type=1334 audit(1765789270.275:296): prog-id=68 op=LOAD Dec 15 09:01:10.275000 audit: BPF prog-id=68 op=LOAD Dec 15 09:01:10.275000 audit: BPF prog-id=69 op=LOAD Dec 15 09:01:10.281308 kernel: audit: type=1334 audit(1765789270.275:297): prog-id=69 op=LOAD Dec 15 09:01:10.281354 kernel: audit: type=1334 audit(1765789270.275:298): prog-id=58 op=UNLOAD Dec 15 09:01:10.275000 audit: BPF prog-id=58 op=UNLOAD Dec 15 09:01:10.282546 kernel: audit: type=1334 audit(1765789270.275:299): prog-id=59 op=UNLOAD Dec 15 09:01:10.275000 audit: BPF prog-id=59 op=UNLOAD Dec 15 09:01:10.283787 kernel: audit: type=1334 audit(1765789270.277:300): prog-id=70 op=LOAD Dec 15 09:01:10.277000 audit: BPF prog-id=70 op=LOAD Dec 15 09:01:10.277000 audit: BPF prog-id=60 op=UNLOAD Dec 15 09:01:10.286273 kernel: audit: type=1334 audit(1765789270.277:301): prog-id=60 op=UNLOAD Dec 15 09:01:10.277000 audit: BPF prog-id=71 op=LOAD Dec 15 09:01:10.277000 audit: BPF prog-id=72 op=LOAD Dec 15 09:01:10.277000 audit: BPF prog-id=61 op=UNLOAD Dec 15 09:01:10.277000 audit: BPF prog-id=62 op=UNLOAD Dec 15 09:01:10.279000 audit: BPF prog-id=73 op=LOAD Dec 15 09:01:10.279000 audit: BPF prog-id=54 op=UNLOAD Dec 15 09:01:10.279000 audit: BPF prog-id=74 op=LOAD Dec 15 09:01:10.279000 audit: BPF prog-id=75 op=LOAD Dec 15 09:01:10.279000 audit: BPF prog-id=55 op=UNLOAD Dec 15 09:01:10.279000 audit: BPF prog-id=56 op=UNLOAD Dec 15 09:01:10.279000 audit: BPF prog-id=76 op=LOAD Dec 15 09:01:10.279000 audit: BPF prog-id=63 op=UNLOAD Dec 15 09:01:10.295000 audit: BPF prog-id=77 op=LOAD Dec 15 09:01:10.295000 audit: BPF prog-id=47 op=UNLOAD Dec 15 09:01:10.295000 audit: BPF prog-id=78 
op=LOAD Dec 15 09:01:10.295000 audit: BPF prog-id=79 op=LOAD Dec 15 09:01:10.295000 audit: BPF prog-id=48 op=UNLOAD Dec 15 09:01:10.295000 audit: BPF prog-id=49 op=UNLOAD Dec 15 09:01:10.296000 audit: BPF prog-id=80 op=LOAD Dec 15 09:01:10.296000 audit: BPF prog-id=51 op=UNLOAD Dec 15 09:01:10.297000 audit: BPF prog-id=81 op=LOAD Dec 15 09:01:10.297000 audit: BPF prog-id=82 op=LOAD Dec 15 09:01:10.297000 audit: BPF prog-id=52 op=UNLOAD Dec 15 09:01:10.297000 audit: BPF prog-id=53 op=UNLOAD Dec 15 09:01:10.298000 audit: BPF prog-id=83 op=LOAD Dec 15 09:01:10.298000 audit: BPF prog-id=50 op=UNLOAD Dec 15 09:01:10.301000 audit: BPF prog-id=84 op=LOAD Dec 15 09:01:10.301000 audit: BPF prog-id=64 op=UNLOAD Dec 15 09:01:10.301000 audit: BPF prog-id=85 op=LOAD Dec 15 09:01:10.301000 audit: BPF prog-id=86 op=LOAD Dec 15 09:01:10.301000 audit: BPF prog-id=65 op=UNLOAD Dec 15 09:01:10.301000 audit: BPF prog-id=66 op=UNLOAD Dec 15 09:01:10.338656 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 15 09:01:10.338781 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 15 09:01:10.339245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:01:10.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 15 09:01:10.339310 systemd[1]: kubelet.service: Consumed 184ms CPU time, 98.5M memory peak. Dec 15 09:01:10.341300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 15 09:01:10.547587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:01:10.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:01:10.553238 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 15 09:01:10.614950 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 09:01:10.615880 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 15 09:01:10.615880 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 09:01:10.615880 kubelet[2399]: I1215 09:01:10.615643 2399 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 15 09:01:10.837552 kubelet[2399]: I1215 09:01:10.837404 2399 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 15 09:01:10.837552 kubelet[2399]: I1215 09:01:10.837445 2399 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 15 09:01:10.837874 kubelet[2399]: I1215 09:01:10.837765 2399 server.go:956] "Client rotation is on, will bootstrap in background" Dec 15 09:01:10.866632 kubelet[2399]: E1215 09:01:10.866530 2399 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 15 09:01:10.866870 kubelet[2399]: I1215 09:01:10.866838 2399 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 15 09:01:10.879432 kubelet[2399]: I1215 09:01:10.879397 2399 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 15 09:01:10.885587 kubelet[2399]: I1215 09:01:10.885554 2399 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 15 09:01:10.885869 kubelet[2399]: I1215 09:01:10.885827 2399 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 15 09:01:10.886042 kubelet[2399]: I1215 09:01:10.885858 2399 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMana
gerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 15 09:01:10.886140 kubelet[2399]: I1215 09:01:10.886042 2399 topology_manager.go:138] "Creating topology manager with none policy" Dec 15 09:01:10.886140 kubelet[2399]: I1215 09:01:10.886051 2399 container_manager_linux.go:303] "Creating device plugin manager" Dec 15 09:01:10.886811 kubelet[2399]: I1215 09:01:10.886776 2399 state_mem.go:36] "Initialized new in-memory state store" Dec 15 09:01:10.888819 kubelet[2399]: I1215 09:01:10.888784 2399 kubelet.go:480] "Attempting to sync node with API server" Dec 15 09:01:10.888867 kubelet[2399]: I1215 09:01:10.888821 2399 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 15 09:01:10.888867 kubelet[2399]: I1215 09:01:10.888856 2399 kubelet.go:386] "Adding apiserver pod source" Dec 15 09:01:10.888909 kubelet[2399]: I1215 09:01:10.888871 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 15 09:01:10.893397 kubelet[2399]: I1215 09:01:10.893361 2399 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 15 09:01:10.895343 kubelet[2399]: I1215 09:01:10.893945 2399 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 15 09:01:10.896889 kubelet[2399]: E1215 09:01:10.896863 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 15 09:01:10.897019 
kubelet[2399]: E1215 09:01:10.896976 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 15 09:01:10.897721 kubelet[2399]: W1215 09:01:10.897688 2399 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 15 09:01:10.900990 kubelet[2399]: I1215 09:01:10.900968 2399 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 15 09:01:10.901050 kubelet[2399]: I1215 09:01:10.901016 2399 server.go:1289] "Started kubelet" Dec 15 09:01:10.901186 kubelet[2399]: I1215 09:01:10.901133 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 15 09:01:10.902437 kubelet[2399]: I1215 09:01:10.902419 2399 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 15 09:01:10.902527 kubelet[2399]: I1215 09:01:10.902425 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 15 09:01:10.954329 kubelet[2399]: I1215 09:01:10.954254 2399 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 15 09:01:10.954785 kubelet[2399]: E1215 09:01:10.954708 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:10.954785 kubelet[2399]: I1215 09:01:10.954773 2399 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 15 09:01:10.955139 kubelet[2399]: I1215 09:01:10.955129 2399 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 15 09:01:10.955263 kubelet[2399]: I1215 09:01:10.955226 2399 reconciler.go:26] "Reconciler: start to sync state" Dec 15 09:01:10.956859 kubelet[2399]: I1215 09:01:10.956846 
2399 server.go:317] "Adding debug handlers to kubelet server" Dec 15 09:01:10.956995 kubelet[2399]: E1215 09:01:10.956968 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 15 09:01:10.957376 kubelet[2399]: E1215 09:01:10.957343 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Dec 15 09:01:10.957820 kubelet[2399]: I1215 09:01:10.957761 2399 factory.go:223] Registration of the systemd container factory successfully Dec 15 09:01:10.957886 kubelet[2399]: I1215 09:01:10.957866 2399 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 15 09:01:10.959472 kubelet[2399]: E1215 09:01:10.955974 2399 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188157fc803fd11b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-15 09:01:10.900986139 +0000 UTC m=+0.338144551,LastTimestamp:2025-12-15 09:01:10.900986139 +0000 UTC m=+0.338144551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 15 09:01:10.960590 kubelet[2399]: E1215 09:01:10.960568 2399 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 15 09:01:10.960818 kubelet[2399]: I1215 09:01:10.960673 2399 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 15 09:01:10.962141 kubelet[2399]: I1215 09:01:10.962117 2399 factory.go:223] Registration of the containerd container factory successfully Dec 15 09:01:10.964000 audit[2418]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.964000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff97d93f00 a2=0 a3=0 items=0 ppid=2399 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 15 09:01:10.965000 audit[2419]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.965000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd425c23a0 a2=0 a3=0 items=0 ppid=2399 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 
Dec 15 09:01:10.968000 audit[2421]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.968000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe4d85b660 a2=0 a3=0 items=0 ppid=2399 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 15 09:01:10.971000 audit[2423]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.971000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdffd98190 a2=0 a3=0 items=0 ppid=2399 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 15 09:01:10.978000 audit[2428]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.978000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd60ec8000 a2=0 a3=0 items=0 ppid=2399 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.978000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 15 09:01:10.979755 kubelet[2399]: I1215 09:01:10.979723 2399 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 15 09:01:10.980000 audit[2430]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.980000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee859c230 a2=0 a3=0 items=0 ppid=2399 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 15 09:01:10.981507 kubelet[2399]: I1215 09:01:10.980969 2399 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 15 09:01:10.981507 kubelet[2399]: I1215 09:01:10.980985 2399 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 15 09:01:10.981507 kubelet[2399]: I1215 09:01:10.981002 2399 state_mem.go:36] "Initialized new in-memory state store" Dec 15 09:01:10.980000 audit[2429]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:10.980000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff3b2586d0 a2=0 a3=0 items=0 ppid=2399 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.980000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 15 09:01:10.982263 kubelet[2399]: I1215 09:01:10.982004 2399 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 15 09:01:10.982263 kubelet[2399]: I1215 09:01:10.982025 2399 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 15 09:01:10.982263 kubelet[2399]: I1215 09:01:10.982057 2399 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 15 09:01:10.982263 kubelet[2399]: I1215 09:01:10.982066 2399 kubelet.go:2436] "Starting kubelet main sync loop" Dec 15 09:01:10.982263 kubelet[2399]: E1215 09:01:10.982106 2399 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 15 09:01:10.982000 audit[2432]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.982000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbaef24f0 a2=0 a3=0 items=0 ppid=2399 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 15 09:01:10.982000 audit[2433]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:10.982000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea6fb8440 a2=0 a3=0 items=0 ppid=2399 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 15 09:01:10.983000 audit[2434]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:10.983000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf9f61d10 a2=0 a3=0 items=0 ppid=2399 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 15 09:01:10.983000 audit[2435]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:10.983000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff07d7f670 a2=0 a3=0 items=0 ppid=2399 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.983000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 15 09:01:10.985000 audit[2436]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:10.985000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf88f0550 a2=0 a3=0 items=0 ppid=2399 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:10.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 15 09:01:11.055857 kubelet[2399]: E1215 09:01:11.055766 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:11.083174 kubelet[2399]: E1215 09:01:11.083105 2399 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 15 09:01:11.156478 kubelet[2399]: E1215 09:01:11.156378 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:11.157873 kubelet[2399]: E1215 09:01:11.157849 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Dec 15 09:01:11.257104 kubelet[2399]: E1215 09:01:11.257054 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:11.283216 kubelet[2399]: E1215 09:01:11.283180 2399 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 15 09:01:11.357628 kubelet[2399]: E1215 09:01:11.357543 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:11.458686 kubelet[2399]: E1215 09:01:11.458543 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:11.509898 kubelet[2399]: E1215 09:01:11.509836 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 15 09:01:11.510000 kubelet[2399]: I1215 09:01:11.509988 2399 policy_none.go:49] "None policy: Start" Dec 15 09:01:11.510045 kubelet[2399]: I1215 09:01:11.510006 2399 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 15 09:01:11.510045 kubelet[2399]: I1215 09:01:11.510018 2399 state_mem.go:35] "Initializing new in-memory state store" Dec 15 09:01:11.516113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 15 09:01:11.535659 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 15 09:01:11.539308 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 15 09:01:11.558334 kubelet[2399]: E1215 09:01:11.558109 2399 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 15 09:01:11.558464 kubelet[2399]: I1215 09:01:11.558438 2399 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 15 09:01:11.558501 kubelet[2399]: E1215 09:01:11.558434 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Dec 15 09:01:11.558501 kubelet[2399]: I1215 09:01:11.558462 2399 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 15 09:01:11.558952 kubelet[2399]: I1215 09:01:11.558783 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 15 09:01:11.559815 kubelet[2399]: E1215 09:01:11.559785 2399 eviction_manager.go:267] "eviction 
manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 15 09:01:11.560129 kubelet[2399]: E1215 09:01:11.560113 2399 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 15 09:01:11.660294 kubelet[2399]: I1215 09:01:11.660252 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 15 09:01:11.660774 kubelet[2399]: E1215 09:01:11.660652 2399 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Dec 15 09:01:11.696398 systemd[1]: Created slice kubepods-burstable-pod22a7fc3e53836ce4eb1d2317a849d94a.slice - libcontainer container kubepods-burstable-pod22a7fc3e53836ce4eb1d2317a849d94a.slice. Dec 15 09:01:11.713790 kubelet[2399]: E1215 09:01:11.713704 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:11.715634 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 15 09:01:11.727435 kubelet[2399]: E1215 09:01:11.727409 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:11.730639 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 15 09:01:11.732413 kubelet[2399]: E1215 09:01:11.732390 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:11.759014 kubelet[2399]: I1215 09:01:11.758962 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22a7fc3e53836ce4eb1d2317a849d94a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22a7fc3e53836ce4eb1d2317a849d94a\") " pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:11.759193 kubelet[2399]: I1215 09:01:11.759145 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22a7fc3e53836ce4eb1d2317a849d94a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22a7fc3e53836ce4eb1d2317a849d94a\") " pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:11.759193 kubelet[2399]: I1215 09:01:11.759192 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22a7fc3e53836ce4eb1d2317a849d94a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22a7fc3e53836ce4eb1d2317a849d94a\") " pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:11.759364 kubelet[2399]: I1215 09:01:11.759300 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:11.759364 kubelet[2399]: I1215 09:01:11.759325 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:11.759364 kubelet[2399]: I1215 09:01:11.759363 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:11.759447 kubelet[2399]: I1215 09:01:11.759391 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:11.759476 kubelet[2399]: I1215 09:01:11.759457 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:11.759502 kubelet[2399]: I1215 09:01:11.759479 2399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 15 09:01:11.861676 kubelet[2399]: I1215 09:01:11.861641 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 15 09:01:11.862084 
kubelet[2399]: E1215 09:01:11.862047 2399 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Dec 15 09:01:12.015098 kubelet[2399]: E1215 09:01:12.014966 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:12.015731 containerd[1611]: time="2025-12-15T09:01:12.015675383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22a7fc3e53836ce4eb1d2317a849d94a,Namespace:kube-system,Attempt:0,}" Dec 15 09:01:12.029000 kubelet[2399]: E1215 09:01:12.028964 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:12.029608 containerd[1611]: time="2025-12-15T09:01:12.029561521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 15 09:01:12.033677 kubelet[2399]: E1215 09:01:12.033639 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:12.033982 containerd[1611]: time="2025-12-15T09:01:12.033932612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 15 09:01:12.069758 containerd[1611]: time="2025-12-15T09:01:12.068946650Z" level=info msg="connecting to shim f3b5b84bfdfd0ea07112513af02b8d0e88f78930e634f6d8e7b1843a31360887" address="unix:///run/containerd/s/a6798a2b9c82217adc48cd376286cc79ddca024df8cb40c560716aa4337a088b" namespace=k8s.io protocol=ttrpc version=3 Dec 15 
09:01:12.085699 containerd[1611]: time="2025-12-15T09:01:12.085652203Z" level=info msg="connecting to shim 96d4a96c5ef87831657b1174210b845107b19e09f9aa501820e17324aef015f9" address="unix:///run/containerd/s/2c55df10e4207f54f70bd0ba7d0c717d33ff26beebbc7e7be35c96719466666b" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:01:12.132461 containerd[1611]: time="2025-12-15T09:01:12.132422243Z" level=info msg="connecting to shim 46a07d70a2404f8bd4633d222a1912e087db59d3264cccaec941508457e57c36" address="unix:///run/containerd/s/9643a921d1ea3f4dce02a815f2ff357318ff56fef431eeb9818f78e35e28f31f" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:01:12.135076 systemd[1]: Started cri-containerd-f3b5b84bfdfd0ea07112513af02b8d0e88f78930e634f6d8e7b1843a31360887.scope - libcontainer container f3b5b84bfdfd0ea07112513af02b8d0e88f78930e634f6d8e7b1843a31360887. Dec 15 09:01:12.142055 systemd[1]: Started cri-containerd-96d4a96c5ef87831657b1174210b845107b19e09f9aa501820e17324aef015f9.scope - libcontainer container 96d4a96c5ef87831657b1174210b845107b19e09f9aa501820e17324aef015f9. 
Dec 15 09:01:12.158000 audit: BPF prog-id=87 op=LOAD Dec 15 09:01:12.158000 audit: BPF prog-id=88 op=LOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.158000 audit: BPF prog-id=88 op=UNLOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.158000 audit: BPF prog-id=89 op=LOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.158000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.158000 audit: BPF prog-id=90 op=LOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.158000 audit: BPF prog-id=90 op=UNLOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.158000 audit: BPF prog-id=89 op=UNLOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:01:12.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.158000 audit: BPF prog-id=91 op=LOAD Dec 15 09:01:12.158000 audit[2496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2473 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936643461393663356566383738333136353762313137343231306238 Dec 15 09:01:12.209302 kubelet[2399]: E1215 09:01:12.209243 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 15 09:01:12.232042 systemd[1]: Started cri-containerd-46a07d70a2404f8bd4633d222a1912e087db59d3264cccaec941508457e57c36.scope - libcontainer container 46a07d70a2404f8bd4633d222a1912e087db59d3264cccaec941508457e57c36. 
Dec 15 09:01:12.234000 audit: BPF prog-id=92 op=LOAD Dec 15 09:01:12.234000 audit: BPF prog-id=93 op=LOAD Dec 15 09:01:12.234000 audit[2461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.234000 audit: BPF prog-id=93 op=UNLOAD Dec 15 09:01:12.234000 audit[2461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.234000 audit: BPF prog-id=94 op=LOAD Dec 15 09:01:12.234000 audit[2461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.234000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.235000 audit: BPF prog-id=95 op=LOAD Dec 15 09:01:12.235000 audit[2461]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.235000 audit: BPF prog-id=95 op=UNLOAD Dec 15 09:01:12.235000 audit[2461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.235000 audit: BPF prog-id=94 op=UNLOAD Dec 15 09:01:12.235000 audit[2461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:01:12.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.235000 audit: BPF prog-id=96 op=LOAD Dec 15 09:01:12.235000 audit[2461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2447 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623562383462666466643065613037313132353133616630326238 Dec 15 09:01:12.247000 audit: BPF prog-id=97 op=LOAD Dec 15 09:01:12.247000 audit: BPF prog-id=98 op=LOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.247000 audit: BPF prog-id=98 op=UNLOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.247000 audit: BPF prog-id=99 op=LOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.247000 audit: BPF prog-id=100 op=LOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.247000 audit: BPF prog-id=100 op=UNLOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.247000 audit: BPF prog-id=99 op=UNLOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.247000 audit: BPF prog-id=101 op=LOAD Dec 15 09:01:12.247000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2507 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436613037643730613234303466386264343633336432323261313931 Dec 15 09:01:12.314413 kubelet[2399]: I1215 09:01:12.314210 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 15 09:01:12.319828 containerd[1611]: 
time="2025-12-15T09:01:12.319407268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d4a96c5ef87831657b1174210b845107b19e09f9aa501820e17324aef015f9\"" Dec 15 09:01:12.322031 kubelet[2399]: E1215 09:01:12.321870 2399 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Dec 15 09:01:12.323108 kubelet[2399]: E1215 09:01:12.323089 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:12.332401 containerd[1611]: time="2025-12-15T09:01:12.332331867Z" level=info msg="CreateContainer within sandbox \"96d4a96c5ef87831657b1174210b845107b19e09f9aa501820e17324aef015f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 15 09:01:12.334364 containerd[1611]: time="2025-12-15T09:01:12.334337014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:22a7fc3e53836ce4eb1d2317a849d94a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3b5b84bfdfd0ea07112513af02b8d0e88f78930e634f6d8e7b1843a31360887\"" Dec 15 09:01:12.335665 kubelet[2399]: E1215 09:01:12.335631 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:12.338864 kubelet[2399]: E1215 09:01:12.338799 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Dec 15 09:01:12.341053 containerd[1611]: time="2025-12-15T09:01:12.341017858Z" level=info msg="CreateContainer within sandbox \"f3b5b84bfdfd0ea07112513af02b8d0e88f78930e634f6d8e7b1843a31360887\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 15 09:01:12.345011 containerd[1611]: time="2025-12-15T09:01:12.344944125Z" level=info msg="Container 1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:12.345168 containerd[1611]: time="2025-12-15T09:01:12.345141894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"46a07d70a2404f8bd4633d222a1912e087db59d3264cccaec941508457e57c36\"" Dec 15 09:01:12.345829 kubelet[2399]: E1215 09:01:12.345684 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:12.347728 kubelet[2399]: E1215 09:01:12.347671 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 15 09:01:12.352005 containerd[1611]: time="2025-12-15T09:01:12.351954222Z" level=info msg="CreateContainer within sandbox \"46a07d70a2404f8bd4633d222a1912e087db59d3264cccaec941508457e57c36\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 15 09:01:12.357993 containerd[1611]: time="2025-12-15T09:01:12.357940474Z" level=info msg="CreateContainer within sandbox \"96d4a96c5ef87831657b1174210b845107b19e09f9aa501820e17324aef015f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734\"" Dec 15 09:01:12.358683 containerd[1611]: time="2025-12-15T09:01:12.358653658Z" level=info msg="StartContainer for \"1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734\"" Dec 15 09:01:12.359023 kubelet[2399]: E1215 09:01:12.358985 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Dec 15 09:01:12.360368 containerd[1611]: time="2025-12-15T09:01:12.359993075Z" level=info msg="connecting to shim 1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734" address="unix:///run/containerd/s/2c55df10e4207f54f70bd0ba7d0c717d33ff26beebbc7e7be35c96719466666b" protocol=ttrpc version=3 Dec 15 09:01:12.360368 containerd[1611]: time="2025-12-15T09:01:12.360075093Z" level=info msg="Container fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:12.369325 containerd[1611]: time="2025-12-15T09:01:12.369276260Z" level=info msg="CreateContainer within sandbox \"f3b5b84bfdfd0ea07112513af02b8d0e88f78930e634f6d8e7b1843a31360887\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b\"" Dec 15 09:01:12.369837 containerd[1611]: time="2025-12-15T09:01:12.369781707Z" level=info msg="StartContainer for \"fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b\"" Dec 15 09:01:12.370993 containerd[1611]: time="2025-12-15T09:01:12.370961451Z" level=info msg="connecting to shim fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b" address="unix:///run/containerd/s/a6798a2b9c82217adc48cd376286cc79ddca024df8cb40c560716aa4337a088b" protocol=ttrpc version=3 Dec 15 09:01:12.371290 containerd[1611]: 
time="2025-12-15T09:01:12.371252156Z" level=info msg="Container 99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:12.379528 containerd[1611]: time="2025-12-15T09:01:12.379461332Z" level=info msg="CreateContainer within sandbox \"46a07d70a2404f8bd4633d222a1912e087db59d3264cccaec941508457e57c36\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f\"" Dec 15 09:01:12.380105 systemd[1]: Started cri-containerd-1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734.scope - libcontainer container 1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734. Dec 15 09:01:12.381882 containerd[1611]: time="2025-12-15T09:01:12.381844346Z" level=info msg="StartContainer for \"99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f\"" Dec 15 09:01:12.383076 containerd[1611]: time="2025-12-15T09:01:12.383034268Z" level=info msg="connecting to shim 99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f" address="unix:///run/containerd/s/9643a921d1ea3f4dce02a815f2ff357318ff56fef431eeb9818f78e35e28f31f" protocol=ttrpc version=3 Dec 15 09:01:12.397207 systemd[1]: Started cri-containerd-fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b.scope - libcontainer container fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b. 
Dec 15 09:01:12.399244 kubelet[2399]: E1215 09:01:12.399184 2399 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 15 09:01:12.407985 systemd[1]: Started cri-containerd-99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f.scope - libcontainer container 99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f. Dec 15 09:01:12.410000 audit: BPF prog-id=102 op=LOAD Dec 15 09:01:12.410000 audit: BPF prog-id=103 op=LOAD Dec 15 09:01:12.410000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.411000 audit: BPF prog-id=103 op=UNLOAD Dec 15 09:01:12.411000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.411000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.411000 audit: BPF prog-id=104 op=LOAD Dec 15 09:01:12.411000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.411000 audit: BPF prog-id=105 op=LOAD Dec 15 09:01:12.411000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.411000 audit: BPF prog-id=105 op=UNLOAD Dec 15 09:01:12.411000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 15 09:01:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.411000 audit: BPF prog-id=104 op=UNLOAD Dec 15 09:01:12.411000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.411000 audit: BPF prog-id=106 op=LOAD Dec 15 09:01:12.411000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2473 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163666130306236343261313865343962323936613636636639303535 Dec 15 09:01:12.415000 audit: BPF prog-id=107 op=LOAD Dec 15 09:01:12.416000 audit: BPF prog-id=108 op=LOAD Dec 15 09:01:12.416000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.416000 audit: BPF prog-id=108 op=UNLOAD Dec 15 09:01:12.416000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.416000 audit: BPF prog-id=109 op=LOAD Dec 15 09:01:12.416000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.417000 audit: BPF prog-id=110 op=LOAD Dec 15 09:01:12.417000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.417000 audit: BPF prog-id=110 op=UNLOAD Dec 15 09:01:12.417000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.417000 audit: BPF prog-id=109 op=UNLOAD Dec 15 09:01:12.417000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.417000 audit: BPF prog-id=111 op=LOAD Dec 15 09:01:12.417000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 
ppid=2447 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303433643562323338343162383434633933633932396636393836 Dec 15 09:01:12.479000 audit: BPF prog-id=112 op=LOAD Dec 15 09:01:12.480000 audit: BPF prog-id=113 op=LOAD Dec 15 09:01:12.480000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.480000 audit: BPF prog-id=113 op=UNLOAD Dec 15 09:01:12.480000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.480000 audit: BPF prog-id=114 op=LOAD Dec 15 09:01:12.480000 
audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.480000 audit: BPF prog-id=115 op=LOAD Dec 15 09:01:12.480000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.480000 audit: BPF prog-id=115 op=UNLOAD Dec 15 09:01:12.480000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.480000 audit: BPF 
prog-id=114 op=UNLOAD Dec 15 09:01:12.480000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.480000 audit: BPF prog-id=116 op=LOAD Dec 15 09:01:12.480000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2507 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:12.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939313639663434303731333437306235623931656636313639373763 Dec 15 09:01:12.514168 containerd[1611]: time="2025-12-15T09:01:12.514113131Z" level=info msg="StartContainer for \"fe043d5b23841b844c93c929f6986c5db9e1b781c2ded76de71a5c6b33467c0b\" returns successfully" Dec 15 09:01:12.515899 containerd[1611]: time="2025-12-15T09:01:12.515517555Z" level=info msg="StartContainer for \"1cfa00b642a18e49b296a66cf9055a88d154d10ddccee12468b2c1132109c734\" returns successfully" Dec 15 09:01:12.539492 containerd[1611]: time="2025-12-15T09:01:12.539445634Z" level=info msg="StartContainer for \"99169f440713470b5b91ef616977c017e17af893f989c3dac6d35568edc75e1f\" returns successfully" Dec 15 09:01:12.997401 kubelet[2399]: E1215 09:01:12.997338 
2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:13.001867 kubelet[2399]: E1215 09:01:12.998003 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:13.004220 kubelet[2399]: E1215 09:01:13.004197 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:13.004761 kubelet[2399]: E1215 09:01:13.004743 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:13.007199 kubelet[2399]: E1215 09:01:13.007184 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:13.007374 kubelet[2399]: E1215 09:01:13.007359 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:13.124865 kubelet[2399]: I1215 09:01:13.124795 2399 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 15 09:01:13.941651 kubelet[2399]: I1215 09:01:13.941598 2399 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 15 09:01:13.941651 kubelet[2399]: E1215 09:01:13.941664 2399 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 15 09:01:13.957730 kubelet[2399]: E1215 09:01:13.957421 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.008721 kubelet[2399]: E1215 
09:01:14.008688 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:14.009164 kubelet[2399]: E1215 09:01:14.008831 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:14.009164 kubelet[2399]: E1215 09:01:14.008926 2399 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:14.009164 kubelet[2399]: E1215 09:01:14.009091 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:14.057580 kubelet[2399]: E1215 09:01:14.057550 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.158237 kubelet[2399]: E1215 09:01:14.158193 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.258777 kubelet[2399]: E1215 09:01:14.258637 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.359471 kubelet[2399]: E1215 09:01:14.359410 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.460412 kubelet[2399]: E1215 09:01:14.460374 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.561336 kubelet[2399]: E1215 09:01:14.561188 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.583210 kubelet[2399]: E1215 09:01:14.583073 2399 kubelet.go:3305] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 15 09:01:14.583347 kubelet[2399]: E1215 09:01:14.583234 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:14.661936 kubelet[2399]: E1215 09:01:14.661875 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.762834 kubelet[2399]: E1215 09:01:14.762736 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.864093 kubelet[2399]: E1215 09:01:14.863897 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:14.965061 kubelet[2399]: E1215 09:01:14.965022 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:15.065574 kubelet[2399]: E1215 09:01:15.065525 2399 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 15 09:01:15.158006 kubelet[2399]: I1215 09:01:15.157893 2399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 15 09:01:15.164672 kubelet[2399]: I1215 09:01:15.164646 2399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:15.168979 kubelet[2399]: I1215 09:01:15.168928 2399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:15.895585 kubelet[2399]: I1215 09:01:15.895524 2399 apiserver.go:52] "Watching apiserver" Dec 15 09:01:15.898394 kubelet[2399]: E1215 09:01:15.898102 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:15.898394 kubelet[2399]: E1215 09:01:15.898126 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:15.898394 kubelet[2399]: E1215 09:01:15.898333 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:15.955445 kubelet[2399]: I1215 09:01:15.955395 2399 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 15 09:01:16.004342 systemd[1]: Reload requested from client PID 2688 ('systemctl') (unit session-8.scope)... Dec 15 09:01:16.004359 systemd[1]: Reloading... Dec 15 09:01:16.094835 zram_generator::config[2735]: No configuration found. Dec 15 09:01:16.316409 kubelet[2399]: E1215 09:01:16.316300 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:16.344841 systemd[1]: Reloading finished in 340 ms. Dec 15 09:01:16.373442 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 15 09:01:16.388953 systemd[1]: kubelet.service: Deactivated successfully. Dec 15 09:01:16.389296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:01:16.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:16.389356 systemd[1]: kubelet.service: Consumed 934ms CPU time, 130.8M memory peak. 
Dec 15 09:01:16.390359 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 15 09:01:16.390421 kernel: audit: type=1131 audit(1765789276.387:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:16.391187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 15 09:01:16.389000 audit: BPF prog-id=117 op=LOAD Dec 15 09:01:16.395260 kernel: audit: type=1334 audit(1765789276.389:397): prog-id=117 op=LOAD Dec 15 09:01:16.395304 kernel: audit: type=1334 audit(1765789276.389:398): prog-id=70 op=UNLOAD Dec 15 09:01:16.395322 kernel: audit: type=1334 audit(1765789276.389:399): prog-id=118 op=LOAD Dec 15 09:01:16.395338 kernel: audit: type=1334 audit(1765789276.389:400): prog-id=119 op=LOAD Dec 15 09:01:16.395366 kernel: audit: type=1334 audit(1765789276.389:401): prog-id=71 op=UNLOAD Dec 15 09:01:16.395380 kernel: audit: type=1334 audit(1765789276.389:402): prog-id=72 op=UNLOAD Dec 15 09:01:16.395400 kernel: audit: type=1334 audit(1765789276.389:403): prog-id=120 op=LOAD Dec 15 09:01:16.395413 kernel: audit: type=1334 audit(1765789276.389:404): prog-id=67 op=UNLOAD Dec 15 09:01:16.395430 kernel: audit: type=1334 audit(1765789276.392:405): prog-id=121 op=LOAD Dec 15 09:01:16.389000 audit: BPF prog-id=70 op=UNLOAD Dec 15 09:01:16.389000 audit: BPF prog-id=118 op=LOAD Dec 15 09:01:16.389000 audit: BPF prog-id=119 op=LOAD Dec 15 09:01:16.389000 audit: BPF prog-id=71 op=UNLOAD Dec 15 09:01:16.389000 audit: BPF prog-id=72 op=UNLOAD Dec 15 09:01:16.389000 audit: BPF prog-id=120 op=LOAD Dec 15 09:01:16.389000 audit: BPF prog-id=67 op=UNLOAD Dec 15 09:01:16.392000 audit: BPF prog-id=121 op=LOAD Dec 15 09:01:16.392000 audit: BPF prog-id=77 op=UNLOAD Dec 15 09:01:16.392000 audit: BPF prog-id=122 op=LOAD Dec 15 09:01:16.392000 audit: BPF prog-id=123 op=LOAD Dec 15 09:01:16.394000 audit: BPF prog-id=78 
op=UNLOAD Dec 15 09:01:16.394000 audit: BPF prog-id=79 op=UNLOAD Dec 15 09:01:16.394000 audit: BPF prog-id=124 op=LOAD Dec 15 09:01:16.394000 audit: BPF prog-id=83 op=UNLOAD Dec 15 09:01:16.395000 audit: BPF prog-id=125 op=LOAD Dec 15 09:01:16.395000 audit: BPF prog-id=76 op=UNLOAD Dec 15 09:01:16.398000 audit: BPF prog-id=126 op=LOAD Dec 15 09:01:16.409000 audit: BPF prog-id=84 op=UNLOAD Dec 15 09:01:16.409000 audit: BPF prog-id=127 op=LOAD Dec 15 09:01:16.409000 audit: BPF prog-id=128 op=LOAD Dec 15 09:01:16.409000 audit: BPF prog-id=85 op=UNLOAD Dec 15 09:01:16.409000 audit: BPF prog-id=86 op=UNLOAD Dec 15 09:01:16.411000 audit: BPF prog-id=129 op=LOAD Dec 15 09:01:16.411000 audit: BPF prog-id=80 op=UNLOAD Dec 15 09:01:16.411000 audit: BPF prog-id=130 op=LOAD Dec 15 09:01:16.412000 audit: BPF prog-id=131 op=LOAD Dec 15 09:01:16.412000 audit: BPF prog-id=81 op=UNLOAD Dec 15 09:01:16.412000 audit: BPF prog-id=82 op=UNLOAD Dec 15 09:01:16.412000 audit: BPF prog-id=132 op=LOAD Dec 15 09:01:16.412000 audit: BPF prog-id=133 op=LOAD Dec 15 09:01:16.412000 audit: BPF prog-id=68 op=UNLOAD Dec 15 09:01:16.412000 audit: BPF prog-id=69 op=UNLOAD Dec 15 09:01:16.413000 audit: BPF prog-id=134 op=LOAD Dec 15 09:01:16.413000 audit: BPF prog-id=73 op=UNLOAD Dec 15 09:01:16.413000 audit: BPF prog-id=135 op=LOAD Dec 15 09:01:16.413000 audit: BPF prog-id=136 op=LOAD Dec 15 09:01:16.413000 audit: BPF prog-id=74 op=UNLOAD Dec 15 09:01:16.413000 audit: BPF prog-id=75 op=UNLOAD Dec 15 09:01:16.597579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 15 09:01:16.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:01:16.601995 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 15 09:01:16.667016 kubelet[2779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 09:01:16.667016 kubelet[2779]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 15 09:01:16.667016 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 09:01:16.667414 kubelet[2779]: I1215 09:01:16.667049 2779 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 15 09:01:16.672997 kubelet[2779]: I1215 09:01:16.672968 2779 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 15 09:01:16.672997 kubelet[2779]: I1215 09:01:16.672987 2779 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 15 09:01:16.673596 kubelet[2779]: I1215 09:01:16.673556 2779 server.go:956] "Client rotation is on, will bootstrap in background" Dec 15 09:01:16.676030 kubelet[2779]: I1215 09:01:16.675973 2779 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 15 09:01:16.679162 kubelet[2779]: I1215 09:01:16.678829 2779 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 15 09:01:16.684456 kubelet[2779]: I1215 09:01:16.684422 2779 server.go:1446] "Using cgroup driver setting received from the CRI 
runtime" cgroupDriver="systemd" Dec 15 09:01:16.688833 kubelet[2779]: I1215 09:01:16.688779 2779 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 15 09:01:16.689049 kubelet[2779]: I1215 09:01:16.689013 2779 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 15 09:01:16.689198 kubelet[2779]: I1215 09:01:16.689037 2779 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"C
groupVersion":2} Dec 15 09:01:16.689272 kubelet[2779]: I1215 09:01:16.689204 2779 topology_manager.go:138] "Creating topology manager with none policy" Dec 15 09:01:16.689272 kubelet[2779]: I1215 09:01:16.689215 2779 container_manager_linux.go:303] "Creating device plugin manager" Dec 15 09:01:16.689272 kubelet[2779]: I1215 09:01:16.689266 2779 state_mem.go:36] "Initialized new in-memory state store" Dec 15 09:01:16.689439 kubelet[2779]: I1215 09:01:16.689424 2779 kubelet.go:480] "Attempting to sync node with API server" Dec 15 09:01:16.689468 kubelet[2779]: I1215 09:01:16.689442 2779 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 15 09:01:16.689468 kubelet[2779]: I1215 09:01:16.689462 2779 kubelet.go:386] "Adding apiserver pod source" Dec 15 09:01:16.689468 kubelet[2779]: I1215 09:01:16.689477 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 15 09:01:16.691419 kubelet[2779]: I1215 09:01:16.691375 2779 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 15 09:01:16.692045 kubelet[2779]: I1215 09:01:16.692017 2779 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 15 09:01:16.698248 kubelet[2779]: I1215 09:01:16.697883 2779 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 15 09:01:16.698248 kubelet[2779]: I1215 09:01:16.697970 2779 server.go:1289] "Started kubelet" Dec 15 09:01:16.700681 kubelet[2779]: I1215 09:01:16.700141 2779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 15 09:01:16.700681 kubelet[2779]: I1215 09:01:16.700432 2779 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 15 09:01:16.700681 kubelet[2779]: I1215 09:01:16.700479 2779 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Dec 15 09:01:16.700771 kubelet[2779]: I1215 09:01:16.700752 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 15 09:01:16.701647 kubelet[2779]: I1215 09:01:16.701624 2779 server.go:317] "Adding debug handlers to kubelet server" Dec 15 09:01:16.704627 kubelet[2779]: I1215 09:01:16.704389 2779 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 15 09:01:16.707018 kubelet[2779]: I1215 09:01:16.707005 2779 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 15 09:01:16.707231 kubelet[2779]: I1215 09:01:16.707219 2779 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 15 09:01:16.707379 kubelet[2779]: I1215 09:01:16.707369 2779 reconciler.go:26] "Reconciler: start to sync state" Dec 15 09:01:16.708316 kubelet[2779]: I1215 09:01:16.708302 2779 factory.go:223] Registration of the systemd container factory successfully Dec 15 09:01:16.708537 kubelet[2779]: I1215 09:01:16.708456 2779 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 15 09:01:16.708628 kubelet[2779]: E1215 09:01:16.708589 2779 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 15 09:01:16.711237 kubelet[2779]: I1215 09:01:16.711218 2779 factory.go:223] Registration of the containerd container factory successfully Dec 15 09:01:16.717210 kubelet[2779]: I1215 09:01:16.717179 2779 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 15 09:01:16.718337 kubelet[2779]: I1215 09:01:16.718310 2779 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 15 09:01:16.718337 kubelet[2779]: I1215 09:01:16.718325 2779 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 15 09:01:16.718413 kubelet[2779]: I1215 09:01:16.718342 2779 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 15 09:01:16.718413 kubelet[2779]: I1215 09:01:16.718349 2779 kubelet.go:2436] "Starting kubelet main sync loop" Dec 15 09:01:16.718413 kubelet[2779]: E1215 09:01:16.718390 2779 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 15 09:01:16.751020 kubelet[2779]: I1215 09:01:16.750984 2779 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 15 09:01:16.751020 kubelet[2779]: I1215 09:01:16.751009 2779 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 15 09:01:16.751161 kubelet[2779]: I1215 09:01:16.751039 2779 state_mem.go:36] "Initialized new in-memory state store" Dec 15 09:01:16.751250 kubelet[2779]: I1215 09:01:16.751227 2779 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 15 09:01:16.751276 kubelet[2779]: I1215 09:01:16.751245 2779 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 15 09:01:16.751276 kubelet[2779]: I1215 09:01:16.751270 2779 policy_none.go:49] "None policy: Start" Dec 15 09:01:16.751317 kubelet[2779]: I1215 09:01:16.751280 2779 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 15 09:01:16.751317 kubelet[2779]: I1215 09:01:16.751294 2779 state_mem.go:35] "Initializing new in-memory state store" Dec 15 09:01:16.751435 kubelet[2779]: I1215 09:01:16.751415 2779 state_mem.go:75] "Updated machine memory state" Dec 15 09:01:16.756163 kubelet[2779]: E1215 09:01:16.756145 2779 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 15 09:01:16.756319 kubelet[2779]: I1215 
09:01:16.756307 2779 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 15 09:01:16.756348 kubelet[2779]: I1215 09:01:16.756318 2779 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 15 09:01:16.756561 kubelet[2779]: I1215 09:01:16.756492 2779 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 15 09:01:16.758095 kubelet[2779]: E1215 09:01:16.758068 2779 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 15 09:01:16.819918 kubelet[2779]: I1215 09:01:16.819856 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 15 09:01:16.820080 kubelet[2779]: I1215 09:01:16.819957 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:16.820080 kubelet[2779]: I1215 09:01:16.819996 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:16.826441 kubelet[2779]: E1215 09:01:16.826409 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 15 09:01:16.826560 kubelet[2779]: E1215 09:01:16.826508 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:16.827052 kubelet[2779]: E1215 09:01:16.827024 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:16.864356 kubelet[2779]: I1215 09:01:16.864248 2779 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 15 09:01:16.870663 kubelet[2779]: I1215 09:01:16.870609 2779 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Dec 15 09:01:16.870926 kubelet[2779]: I1215 09:01:16.870690 2779 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 15 09:01:16.908298 kubelet[2779]: I1215 09:01:16.908255 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22a7fc3e53836ce4eb1d2317a849d94a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"22a7fc3e53836ce4eb1d2317a849d94a\") " pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:16.908298 kubelet[2779]: I1215 09:01:16.908295 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:16.908298 kubelet[2779]: I1215 09:01:16.908314 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:16.908536 kubelet[2779]: I1215 09:01:16.908331 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 15 09:01:16.908536 kubelet[2779]: I1215 09:01:16.908347 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/22a7fc3e53836ce4eb1d2317a849d94a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"22a7fc3e53836ce4eb1d2317a849d94a\") " pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:16.908536 kubelet[2779]: I1215 09:01:16.908373 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22a7fc3e53836ce4eb1d2317a849d94a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"22a7fc3e53836ce4eb1d2317a849d94a\") " pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:16.908536 kubelet[2779]: I1215 09:01:16.908422 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:16.908536 kubelet[2779]: I1215 09:01:16.908455 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:16.908664 kubelet[2779]: I1215 09:01:16.908481 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 15 09:01:17.127138 kubelet[2779]: E1215 09:01:17.126672 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:17.127138 kubelet[2779]: E1215 09:01:17.126753 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:17.127730 kubelet[2779]: E1215 09:01:17.127712 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:17.690615 kubelet[2779]: I1215 09:01:17.690577 2779 apiserver.go:52] "Watching apiserver" Dec 15 09:01:17.707465 kubelet[2779]: I1215 09:01:17.707401 2779 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 15 09:01:17.732286 kubelet[2779]: I1215 09:01:17.732253 2779 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:17.732599 kubelet[2779]: E1215 09:01:17.732257 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:17.732647 kubelet[2779]: E1215 09:01:17.732601 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:17.747243 kubelet[2779]: E1215 09:01:17.747187 2779 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 15 09:01:17.747499 kubelet[2779]: E1215 09:01:17.747440 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:17.769551 kubelet[2779]: I1215 09:01:17.769493 2779 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.769464338 podStartE2EDuration="2.769464338s" podCreationTimestamp="2025-12-15 09:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 09:01:17.762104443 +0000 UTC m=+1.145876357" watchObservedRunningTime="2025-12-15 09:01:17.769464338 +0000 UTC m=+1.153236252" Dec 15 09:01:17.777355 kubelet[2779]: I1215 09:01:17.777279 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.777258678 podStartE2EDuration="2.777258678s" podCreationTimestamp="2025-12-15 09:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 09:01:17.769464548 +0000 UTC m=+1.153236472" watchObservedRunningTime="2025-12-15 09:01:17.777258678 +0000 UTC m=+1.161030592" Dec 15 09:01:18.734150 kubelet[2779]: E1215 09:01:18.734115 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:18.734566 kubelet[2779]: E1215 09:01:18.734285 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:19.735357 kubelet[2779]: E1215 09:01:19.735310 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:22.216153 kubelet[2779]: E1215 09:01:22.216100 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:22.227172 kubelet[2779]: I1215 
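The `pod_startup_latency_tracker` entries above are simple timestamp arithmetic: for static pods nothing is pulled (the `firstStartedPulling`/`lastFinishedPulling` fields sit at the zero time), so `podStartSLOduration` is just `observedRunningTime` minus `podCreationTimestamp`. A sketch reproducing that arithmetic from the logged values (not kubelet code; `strptime` only accepts microseconds, so the nanosecond fraction is truncated):

```python
from datetime import datetime

def parse_log_ts(ts: str) -> datetime:
    """Parse timestamps like '2025-12-15 09:01:17.769464338 +0000 UTC'."""
    ts = ts.removesuffix(" UTC")
    date, clock, tz = ts.split(" ")
    if "." in clock:  # trim nanoseconds to microseconds for strptime's %f
        sec, frac = clock.split(".")
        clock = f"{sec}.{frac[:6]}"
    else:
        clock += ".0"
    return datetime.strptime(f"{date} {clock} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")

# Values taken from the kube-scheduler-localhost entry above:
slo = (parse_log_ts("2025-12-15 09:01:17.769464338 +0000 UTC")
       - parse_log_ts("2025-12-15 09:01:15 +0000 UTC")).total_seconds()
print(f"{slo:.6f}s")  # 2.769464s, matching podStartSLOduration to µs precision
```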
09:01:22.227106 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.227074657 podStartE2EDuration="7.227074657s" podCreationTimestamp="2025-12-15 09:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 09:01:17.777569855 +0000 UTC m=+1.161341769" watchObservedRunningTime="2025-12-15 09:01:22.227074657 +0000 UTC m=+5.610846561" Dec 15 09:01:22.501759 kubelet[2779]: I1215 09:01:22.501578 2779 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 15 09:01:22.501949 containerd[1611]: time="2025-12-15T09:01:22.501907245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 15 09:01:22.502357 kubelet[2779]: I1215 09:01:22.502145 2779 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 15 09:01:22.739785 kubelet[2779]: E1215 09:01:22.739752 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:23.312741 systemd[1]: Created slice kubepods-besteffort-pod1ffa431f_7908_47ce_bfe6_db9f4f1246d7.slice - libcontainer container kubepods-besteffort-pod1ffa431f_7908_47ce_bfe6_db9f4f1246d7.slice. 
Dec 15 09:01:23.351918 kubelet[2779]: I1215 09:01:23.351867 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ffa431f-7908-47ce-bfe6-db9f4f1246d7-xtables-lock\") pod \"kube-proxy-g48gz\" (UID: \"1ffa431f-7908-47ce-bfe6-db9f4f1246d7\") " pod="kube-system/kube-proxy-g48gz" Dec 15 09:01:23.351918 kubelet[2779]: I1215 09:01:23.351921 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ffa431f-7908-47ce-bfe6-db9f4f1246d7-kube-proxy\") pod \"kube-proxy-g48gz\" (UID: \"1ffa431f-7908-47ce-bfe6-db9f4f1246d7\") " pod="kube-system/kube-proxy-g48gz" Dec 15 09:01:23.351918 kubelet[2779]: I1215 09:01:23.351938 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ffa431f-7908-47ce-bfe6-db9f4f1246d7-lib-modules\") pod \"kube-proxy-g48gz\" (UID: \"1ffa431f-7908-47ce-bfe6-db9f4f1246d7\") " pod="kube-system/kube-proxy-g48gz" Dec 15 09:01:23.352407 kubelet[2779]: I1215 09:01:23.352006 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8dcp\" (UniqueName: \"kubernetes.io/projected/1ffa431f-7908-47ce-bfe6-db9f4f1246d7-kube-api-access-r8dcp\") pod \"kube-proxy-g48gz\" (UID: \"1ffa431f-7908-47ce-bfe6-db9f4f1246d7\") " pod="kube-system/kube-proxy-g48gz" Dec 15 09:01:23.568066 systemd[1]: Created slice kubepods-besteffort-poddc8b67b9_b681_4c7e_9879_0ad7b9bfe9a7.slice - libcontainer container kubepods-besteffort-poddc8b67b9_b681_4c7e_9879_0ad7b9bfe9a7.slice. 
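The slice names in the `Created slice kubepods-besteffort-pod…` lines are derived mechanically from the pod UID: systemd unit names use `-` as a hierarchy separator, so the kubelet's systemd cgroup driver replaces the dashes inside the UID with underscores. A sketch of the mapping, checked against the kube-proxy pod UID from the log:

```python
# Sketch: kubepods slice name for a BestEffort pod, as seen in the log.
# systemd treats "-" in unit names as a path separator, so the pod UID's
# dashes become underscores inside the slice name.
def besteffort_slice_name(pod_uid: str) -> str:
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_slice_name("1ffa431f-7908-47ce-bfe6-db9f4f1246d7"))
# -> kubepods-besteffort-pod1ffa431f_7908_47ce_bfe6_db9f4f1246d7.slice
```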
Dec 15 09:01:23.625396 kubelet[2779]: E1215 09:01:23.625345 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:23.626042 containerd[1611]: time="2025-12-15T09:01:23.625996667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g48gz,Uid:1ffa431f-7908-47ce-bfe6-db9f4f1246d7,Namespace:kube-system,Attempt:0,}" Dec 15 09:01:23.654339 kubelet[2779]: I1215 09:01:23.654298 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc8b67b9-b681-4c7e-9879-0ad7b9bfe9a7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8hfxf\" (UID: \"dc8b67b9-b681-4c7e-9879-0ad7b9bfe9a7\") " pod="tigera-operator/tigera-operator-7dcd859c48-8hfxf" Dec 15 09:01:23.654339 kubelet[2779]: I1215 09:01:23.654334 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zlq\" (UniqueName: \"kubernetes.io/projected/dc8b67b9-b681-4c7e-9879-0ad7b9bfe9a7-kube-api-access-48zlq\") pod \"tigera-operator-7dcd859c48-8hfxf\" (UID: \"dc8b67b9-b681-4c7e-9879-0ad7b9bfe9a7\") " pod="tigera-operator/tigera-operator-7dcd859c48-8hfxf" Dec 15 09:01:23.660510 containerd[1611]: time="2025-12-15T09:01:23.660467577Z" level=info msg="connecting to shim 3a66befb17783eab5acfc2fe06d2d2b408f3e5c1b8ca5fa1f810aeb0a37066c8" address="unix:///run/containerd/s/8d6a8fa619e489c948c4bafdd9a10bbb4574963988f629a1fd323154534cf4fb" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:01:23.666722 kubelet[2779]: E1215 09:01:23.666376 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:23.703975 systemd[1]: Started 
cri-containerd-3a66befb17783eab5acfc2fe06d2d2b408f3e5c1b8ca5fa1f810aeb0a37066c8.scope - libcontainer container 3a66befb17783eab5acfc2fe06d2d2b408f3e5c1b8ca5fa1f810aeb0a37066c8. Dec 15 09:01:23.714000 audit: BPF prog-id=137 op=LOAD Dec 15 09:01:23.716642 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 15 09:01:23.716693 kernel: audit: type=1334 audit(1765789283.714:438): prog-id=137 op=LOAD Dec 15 09:01:23.715000 audit: BPF prog-id=138 op=LOAD Dec 15 09:01:23.719302 kernel: audit: type=1334 audit(1765789283.715:439): prog-id=138 op=LOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.724604 kernel: audit: type=1300 audit(1765789283.715:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.729694 kernel: audit: type=1327 audit(1765789283.715:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.729752 kernel: audit: type=1334 audit(1765789283.715:440): prog-id=138 op=UNLOAD Dec 15 09:01:23.715000 
audit: BPF prog-id=138 op=UNLOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.736151 kernel: audit: type=1300 audit(1765789283.715:440): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.742829 kernel: audit: type=1327 audit(1765789283.715:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.744418 kubelet[2779]: E1215 09:01:23.744376 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:23.744681 kubelet[2779]: E1215 09:01:23.744656 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:23.715000 audit: BPF prog-id=139 op=LOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 
ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.752844 containerd[1611]: time="2025-12-15T09:01:23.752787904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g48gz,Uid:1ffa431f-7908-47ce-bfe6-db9f4f1246d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a66befb17783eab5acfc2fe06d2d2b408f3e5c1b8ca5fa1f810aeb0a37066c8\"" Dec 15 09:01:23.757710 kernel: audit: type=1334 audit(1765789283.715:441): prog-id=139 op=LOAD Dec 15 09:01:23.757758 kernel: audit: type=1300 audit(1765789283.715:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.760522 kubelet[2779]: E1215 09:01:23.760490 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.768834 kernel: audit: type=1327 audit(1765789283.715:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.715000 audit: BPF prog-id=140 op=LOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c00017a218 a2=98 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.715000 audit: BPF prog-id=140 op=UNLOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.715000 audit: BPF prog-id=139 op=UNLOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.715000 audit: BPF prog-id=141 op=LOAD Dec 15 09:01:23.715000 audit[2850]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2839 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363662656662313737383365616235616366633266653036643264 Dec 15 09:01:23.772466 containerd[1611]: time="2025-12-15T09:01:23.772434318Z" level=info msg="CreateContainer within sandbox \"3a66befb17783eab5acfc2fe06d2d2b408f3e5c1b8ca5fa1f810aeb0a37066c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 15 09:01:23.785067 containerd[1611]: time="2025-12-15T09:01:23.785024872Z" level=info msg="Container 61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:23.794085 containerd[1611]: time="2025-12-15T09:01:23.794044526Z" level=info msg="CreateContainer within sandbox \"3a66befb17783eab5acfc2fe06d2d2b408f3e5c1b8ca5fa1f810aeb0a37066c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7\"" Dec 15 09:01:23.794666 containerd[1611]: time="2025-12-15T09:01:23.794626304Z" level=info msg="StartContainer for \"61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7\"" Dec 15 09:01:23.796054 containerd[1611]: time="2025-12-15T09:01:23.796028498Z" level=info msg="connecting to shim 61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7" address="unix:///run/containerd/s/8d6a8fa619e489c948c4bafdd9a10bbb4574963988f629a1fd323154534cf4fb" protocol=ttrpc version=3 Dec 15 09:01:23.816972 systemd[1]: Started cri-containerd-61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7.scope 
- libcontainer container 61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7. Dec 15 09:01:23.872196 containerd[1611]: time="2025-12-15T09:01:23.872087276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8hfxf,Uid:dc8b67b9-b681-4c7e-9879-0ad7b9bfe9a7,Namespace:tigera-operator,Attempt:0,}" Dec 15 09:01:23.881000 audit: BPF prog-id=142 op=LOAD Dec 15 09:01:23.881000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2839 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631373538643831336562636236626661633539656261313166316637 Dec 15 09:01:23.881000 audit: BPF prog-id=143 op=LOAD Dec 15 09:01:23.881000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2839 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631373538643831336562636236626661633539656261313166316637 Dec 15 09:01:23.881000 audit: BPF prog-id=143 op=UNLOAD Dec 15 09:01:23.881000 audit[2878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2839 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631373538643831336562636236626661633539656261313166316637 Dec 15 09:01:23.881000 audit: BPF prog-id=142 op=UNLOAD Dec 15 09:01:23.881000 audit[2878]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2839 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631373538643831336562636236626661633539656261313166316637 Dec 15 09:01:23.881000 audit: BPF prog-id=144 op=LOAD Dec 15 09:01:23.881000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2839 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631373538643831336562636236626661633539656261313166316637 Dec 15 09:01:23.893383 containerd[1611]: time="2025-12-15T09:01:23.893303176Z" level=info msg="connecting to shim 154339e58e363900d952191a82814f9884c2d3ee3ea8aefc8cbcd4e463919328" 
address="unix:///run/containerd/s/9071ca5058f5cdb1e56a8d04137ea35a7b5b0a6956ad6d80c967a88fe35cf435" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:01:23.910310 containerd[1611]: time="2025-12-15T09:01:23.910255101Z" level=info msg="StartContainer for \"61758d813ebcb6bfac59eba11f1f7e1f3657fedf2ead7f212d621419422d3ce7\" returns successfully" Dec 15 09:01:23.918970 systemd[1]: Started cri-containerd-154339e58e363900d952191a82814f9884c2d3ee3ea8aefc8cbcd4e463919328.scope - libcontainer container 154339e58e363900d952191a82814f9884c2d3ee3ea8aefc8cbcd4e463919328. Dec 15 09:01:23.930000 audit: BPF prog-id=145 op=LOAD Dec 15 09:01:23.930000 audit: BPF prog-id=146 op=LOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 09:01:23.930000 audit: BPF prog-id=146 op=UNLOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 
09:01:23.930000 audit: BPF prog-id=147 op=LOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 09:01:23.930000 audit: BPF prog-id=148 op=LOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 09:01:23.930000 audit: BPF prog-id=148 op=UNLOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 09:01:23.930000 audit: BPF prog-id=147 op=UNLOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 09:01:23.930000 audit: BPF prog-id=149 op=LOAD Dec 15 09:01:23.930000 audit[2919]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2906 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:23.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343333396535386533363339303064393532313931613832383134 Dec 15 09:01:23.962112 containerd[1611]: time="2025-12-15T09:01:23.962053891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8hfxf,Uid:dc8b67b9-b681-4c7e-9879-0ad7b9bfe9a7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"154339e58e363900d952191a82814f9884c2d3ee3ea8aefc8cbcd4e463919328\"" Dec 15 09:01:23.963448 
containerd[1611]: time="2025-12-15T09:01:23.963388375Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 15 09:01:24.037000 audit[2988]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.037000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff422c1f70 a2=0 a3=7fff422c1f5c items=0 ppid=2893 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 15 09:01:24.038000 audit[2989]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.038000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1f3bb010 a2=0 a3=7fff1f3baffc items=0 ppid=2893 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 15 09:01:24.039000 audit[2991]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.039000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb4552320 a2=0 a3=7ffdb455230c items=0 ppid=2893 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 15 09:01:24.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 15 09:01:24.039000 audit[2990]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.039000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3e717a60 a2=0 a3=7ffd3e717a4c items=0 ppid=2893 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 15 09:01:24.040000 audit[2992]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.040000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd40eecaf0 a2=0 a3=7ffd40eecadc items=0 ppid=2893 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 15 09:01:24.044000 audit[2995]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.044000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc50bab670 a2=0 a3=7ffc50bab65c items=0 ppid=2893 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 15 09:01:24.146000 audit[2997]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.146000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda7dd3560 a2=0 a3=7ffda7dd354c items=0 ppid=2893 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 15 09:01:24.149000 audit[2999]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.149000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff74ae3ec0 a2=0 a3=7fff74ae3eac items=0 ppid=2893 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 15 09:01:24.154000 audit[3002]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.154000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc174cf600 
a2=0 a3=7ffc174cf5ec items=0 ppid=2893 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 15 09:01:24.155000 audit[3003]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.155000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0946df10 a2=0 a3=7ffc0946defc items=0 ppid=2893 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 15 09:01:24.158000 audit[3005]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.158000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc30c99b70 a2=0 a3=7ffc30c99b5c items=0 ppid=2893 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.158000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 15 09:01:24.160000 audit[3006]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.160000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe751e7fe0 a2=0 a3=7ffe751e7fcc items=0 ppid=2893 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.160000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 15 09:01:24.163000 audit[3008]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.163000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd30d3780 a2=0 a3=7fffd30d376c items=0 ppid=2893 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 15 09:01:24.168000 audit[3011]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.168000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffec29f42e0 a2=0 a3=7ffec29f42cc items=0 ppid=2893 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 15 09:01:24.169000 audit[3012]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.169000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd437d3a40 a2=0 a3=7ffd437d3a2c items=0 ppid=2893 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 15 09:01:24.172000 audit[3014]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.172000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6e50c140 a2=0 a3=7fff6e50c12c items=0 ppid=2893 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.172000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 15 09:01:24.173000 audit[3015]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.173000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8767c250 a2=0 a3=7ffc8767c23c items=0 ppid=2893 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 15 09:01:24.176000 audit[3017]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.176000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd2616a10 a2=0 a3=7fffd26169fc items=0 ppid=2893 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 15 09:01:24.181000 audit[3020]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.181000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7fff3918b320 a2=0 a3=7fff3918b30c items=0 ppid=2893 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 15 09:01:24.185000 audit[3023]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.185000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe8ee7420 a2=0 a3=7fffe8ee740c items=0 ppid=2893 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 15 09:01:24.186000 audit[3024]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.186000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbdcc6ae0 a2=0 a3=7fffbdcc6acc items=0 ppid=2893 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.186000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 15 09:01:24.189000 audit[3026]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.189000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd29272530 a2=0 a3=7ffd2927251c items=0 ppid=2893 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 15 09:01:24.194000 audit[3029]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.194000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb3e778d0 a2=0 a3=7ffdb3e778bc items=0 ppid=2893 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 15 09:01:24.195000 audit[3030]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.195000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe03f80210 a2=0 a3=7ffe03f801fc items=0 ppid=2893 pid=3030 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 15 09:01:24.199000 audit[3032]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 15 09:01:24.199000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff4c17acb0 a2=0 a3=7fff4c17ac9c items=0 ppid=2893 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 15 09:01:24.221000 audit[3038]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:24.221000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd1c43e220 a2=0 a3=7ffd1c43e20c items=0 ppid=2893 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:24.241000 audit[3038]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 15 09:01:24.241000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd1c43e220 a2=0 a3=7ffd1c43e20c items=0 ppid=2893 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:24.242000 audit[3044]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.242000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd978318f0 a2=0 a3=7ffd978318dc items=0 ppid=2893 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 15 09:01:24.246000 audit[3046]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.246000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe0f71c030 a2=0 a3=7ffe0f71c01c items=0 ppid=2893 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.246000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 15 09:01:24.250000 audit[3049]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.250000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd36058080 a2=0 a3=7ffd3605806c items=0 ppid=2893 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 15 09:01:24.252000 audit[3050]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.252000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9f80dd30 a2=0 a3=7ffd9f80dd1c items=0 ppid=2893 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 15 09:01:24.255000 audit[3052]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.255000 audit[3052]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffdeb601da0 a2=0 a3=7ffdeb601d8c items=0 ppid=2893 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.255000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 15 09:01:24.256000 audit[3053]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.256000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc56c5fcd0 a2=0 a3=7ffc56c5fcbc items=0 ppid=2893 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 15 09:01:24.259000 audit[3055]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.259000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdceacf580 a2=0 a3=7ffdceacf56c items=0 ppid=2893 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.259000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 15 09:01:24.264000 audit[3058]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.264000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffff2555720 a2=0 a3=7ffff255570c items=0 ppid=2893 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 15 09:01:24.265000 audit[3059]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.265000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd75321330 a2=0 a3=7ffd7532131c items=0 ppid=2893 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 15 09:01:24.268000 audit[3061]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.268000 audit[3061]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe1b7eb560 a2=0 a3=7ffe1b7eb54c items=0 ppid=2893 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 15 09:01:24.270000 audit[3062]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.270000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd4a9de40 a2=0 a3=7ffdd4a9de2c items=0 ppid=2893 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 15 09:01:24.273000 audit[3064]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.273000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf1e855b0 a2=0 a3=7ffdf1e8559c items=0 ppid=2893 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.273000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 15 09:01:24.277000 audit[3067]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.277000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb86456b0 a2=0 a3=7fffb864569c items=0 ppid=2893 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 15 09:01:24.282000 audit[3070]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.282000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec4dd3fc0 a2=0 a3=7ffec4dd3fac items=0 ppid=2893 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 15 09:01:24.284000 audit[3071]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.284000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd97c44fe0 a2=0 a3=7ffd97c44fcc items=0 ppid=2893 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 15 09:01:24.287000 audit[3073]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.287000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcdfa3ac50 a2=0 a3=7ffcdfa3ac3c items=0 ppid=2893 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 15 09:01:24.292000 audit[3076]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.292000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffecf1a4e70 a2=0 a3=7ffecf1a4e5c items=0 ppid=2893 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.292000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 15 09:01:24.294000 audit[3077]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.294000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddeef8400 a2=0 a3=7ffddeef83ec items=0 ppid=2893 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 15 09:01:24.296000 audit[3079]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.296000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc2951eec0 a2=0 a3=7ffc2951eeac items=0 ppid=2893 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 15 09:01:24.298000 audit[3080]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.298000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce8b990f0 a2=0 
a3=7ffce8b990dc items=0 ppid=2893 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 15 09:01:24.301000 audit[3082]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.301000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff3fffbae0 a2=0 a3=7fff3fffbacc items=0 ppid=2893 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 15 09:01:24.305000 audit[3085]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 15 09:01:24.305000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd906d0010 a2=0 a3=7ffd906cfffc items=0 ppid=2893 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 15 09:01:24.309000 audit[3087]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 15 09:01:24.309000 audit[3087]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeba505dd0 a2=0 a3=7ffeba505dbc items=0 ppid=2893 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.309000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:24.309000 audit[3087]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 15 09:01:24.309000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeba505dd0 a2=0 a3=7ffeba505dbc items=0 ppid=2893 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:24.309000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:24.463190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818154206.mount: Deactivated successfully. 
Dec 15 09:01:24.748778 kubelet[2779]: E1215 09:01:24.748632 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:24.758426 kubelet[2779]: I1215 09:01:24.758350 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g48gz" podStartSLOduration=1.758332426 podStartE2EDuration="1.758332426s" podCreationTimestamp="2025-12-15 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 09:01:24.757632623 +0000 UTC m=+8.141404537" watchObservedRunningTime="2025-12-15 09:01:24.758332426 +0000 UTC m=+8.142104340" Dec 15 09:01:25.479950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182781621.mount: Deactivated successfully. Dec 15 09:01:25.980090 containerd[1611]: time="2025-12-15T09:01:25.980001992Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:25.980796 containerd[1611]: time="2025-12-15T09:01:25.980755888Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 15 09:01:25.981876 containerd[1611]: time="2025-12-15T09:01:25.981848842Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:25.984207 containerd[1611]: time="2025-12-15T09:01:25.984152228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:25.984793 containerd[1611]: time="2025-12-15T09:01:25.984762056Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with 
image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.021325168s" Dec 15 09:01:25.984845 containerd[1611]: time="2025-12-15T09:01:25.984795550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 15 09:01:25.989502 containerd[1611]: time="2025-12-15T09:01:25.989465932Z" level=info msg="CreateContainer within sandbox \"154339e58e363900d952191a82814f9884c2d3ee3ea8aefc8cbcd4e463919328\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 15 09:01:25.999868 containerd[1611]: time="2025-12-15T09:01:25.999833925Z" level=info msg="Container 4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:26.007993 containerd[1611]: time="2025-12-15T09:01:26.007955017Z" level=info msg="CreateContainer within sandbox \"154339e58e363900d952191a82814f9884c2d3ee3ea8aefc8cbcd4e463919328\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db\"" Dec 15 09:01:26.009503 containerd[1611]: time="2025-12-15T09:01:26.008445505Z" level=info msg="StartContainer for \"4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db\"" Dec 15 09:01:26.009503 containerd[1611]: time="2025-12-15T09:01:26.009263942Z" level=info msg="connecting to shim 4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db" address="unix:///run/containerd/s/9071ca5058f5cdb1e56a8d04137ea35a7b5b0a6956ad6d80c967a88fe35cf435" protocol=ttrpc version=3 Dec 15 09:01:26.038976 systemd[1]: Started cri-containerd-4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db.scope - libcontainer container 
4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db. Dec 15 09:01:26.050000 audit: BPF prog-id=150 op=LOAD Dec 15 09:01:26.050000 audit: BPF prog-id=151 op=LOAD Dec 15 09:01:26.050000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:26.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.051000 audit: BPF prog-id=151 op=UNLOAD Dec 15 09:01:26.051000 audit[3096]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:26.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.051000 audit: BPF prog-id=152 op=LOAD Dec 15 09:01:26.051000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:26.051000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.051000 audit: BPF prog-id=153 op=LOAD Dec 15 09:01:26.051000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:26.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.051000 audit: BPF prog-id=153 op=UNLOAD Dec 15 09:01:26.051000 audit[3096]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:26.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.051000 audit: BPF prog-id=152 op=UNLOAD Dec 15 09:01:26.051000 audit[3096]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:01:26.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.051000 audit: BPF prog-id=154 op=LOAD Dec 15 09:01:26.051000 audit[3096]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2906 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:26.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462343266353935353336653138616564333432663832653431326662 Dec 15 09:01:26.068232 containerd[1611]: time="2025-12-15T09:01:26.068184129Z" level=info msg="StartContainer for \"4b42f595536e18aed342f82e412fb61fa534d52053155bca391f47fe61bc41db\" returns successfully" Dec 15 09:01:26.759900 kubelet[2779]: I1215 09:01:26.759843 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8hfxf" podStartSLOduration=1.737299097 podStartE2EDuration="3.759788237s" podCreationTimestamp="2025-12-15 09:01:23 +0000 UTC" firstStartedPulling="2025-12-15 09:01:23.963127604 +0000 UTC m=+7.346899518" lastFinishedPulling="2025-12-15 09:01:25.985616744 +0000 UTC m=+9.369388658" observedRunningTime="2025-12-15 09:01:26.759505676 +0000 UTC m=+10.143277600" watchObservedRunningTime="2025-12-15 09:01:26.759788237 +0000 UTC m=+10.143560151" Dec 15 09:01:28.645827 kubelet[2779]: E1215 09:01:28.645046 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:28.755209 kubelet[2779]: E1215 09:01:28.755177 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:31.040776 sudo[1831]: pam_unix(sudo:session): session closed for user root Dec 15 09:01:31.047824 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 15 09:01:31.047946 kernel: audit: type=1106 audit(1765789291.040:518): pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:01:31.047972 kernel: audit: type=1104 audit(1765789291.040:519): pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:01:31.040000 audit[1831]: USER_END pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 15 09:01:31.040000 audit[1831]: CRED_DISP pid=1831 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 15 09:01:31.048137 sshd[1830]: Connection closed by 10.0.0.1 port 37452 Dec 15 09:01:31.046505 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Dec 15 09:01:31.046000 audit[1826]: USER_END pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:31.056410 kernel: audit: type=1106 audit(1765789291.046:520): pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:31.060822 kernel: audit: type=1104 audit(1765789291.046:521): pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:31.046000 audit[1826]: CRED_DISP pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:31.057504 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Dec 15 09:01:31.062480 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:37452.service: Deactivated successfully. Dec 15 09:01:31.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.128:22-10.0.0.1:37452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:01:31.070151 kernel: audit: type=1131 audit(1765789291.064:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.128:22-10.0.0.1:37452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:31.071760 systemd[1]: session-8.scope: Deactivated successfully. Dec 15 09:01:31.072900 systemd[1]: session-8.scope: Consumed 5.364s CPU time, 190.5M memory peak. Dec 15 09:01:31.076736 systemd-logind[1586]: Removed session 8. Dec 15 09:01:31.638000 audit[3188]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:31.638000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffce64d43c0 a2=0 a3=7ffce64d43ac items=0 ppid=2893 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:31.649624 kernel: audit: type=1325 audit(1765789291.638:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:31.649688 kernel: audit: type=1300 audit(1765789291.638:523): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffce64d43c0 a2=0 a3=7ffce64d43ac items=0 ppid=2893 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:31.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:31.651000 audit[3188]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 
15 09:01:31.656145 kernel: audit: type=1327 audit(1765789291.638:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:31.656204 kernel: audit: type=1325 audit(1765789291.651:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:31.656229 kernel: audit: type=1300 audit(1765789291.651:524): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce64d43c0 a2=0 a3=0 items=0 ppid=2893 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:31.651000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce64d43c0 a2=0 a3=0 items=0 ppid=2893 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:31.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:31.677000 audit[3190]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:31.677000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd463ffd0 a2=0 a3=7ffdd463ffbc items=0 ppid=2893 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:31.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:31.684000 audit[3190]: 
NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:31.684000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd463ffd0 a2=0 a3=0 items=0 ppid=2893 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:31.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:32.919498 update_engine[1589]: I20251215 09:01:32.919405 1589 update_attempter.cc:509] Updating boot flags... Dec 15 09:01:33.444000 audit[3211]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:33.444000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdfbc2de70 a2=0 a3=7ffdfbc2de5c items=0 ppid=2893 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:33.444000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:33.468000 audit[3211]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:33.468000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfbc2de70 a2=0 a3=0 items=0 ppid=2893 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:33.468000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:34.490000 audit[3213]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:34.490000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee5c91890 a2=0 a3=7ffee5c9187c items=0 ppid=2893 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:34.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:34.498000 audit[3213]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:34.498000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee5c91890 a2=0 a3=0 items=0 ppid=2893 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:34.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:35.054000 audit[3215]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:35.054000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeabeb6a60 a2=0 a3=7ffeabeb6a4c items=0 ppid=2893 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:35.061000 audit[3215]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:35.061000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeabeb6a60 a2=0 a3=0 items=0 ppid=2893 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:35.148334 systemd[1]: Created slice kubepods-besteffort-podf20e864b_b1ff_46a3_98ee_ca22c3080b90.slice - libcontainer container kubepods-besteffort-podf20e864b_b1ff_46a3_98ee_ca22c3080b90.slice. 
Dec 15 09:01:35.233507 kubelet[2779]: I1215 09:01:35.233424 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f20e864b-b1ff-46a3-98ee-ca22c3080b90-typha-certs\") pod \"calico-typha-bf54fd9b7-dqrqd\" (UID: \"f20e864b-b1ff-46a3-98ee-ca22c3080b90\") " pod="calico-system/calico-typha-bf54fd9b7-dqrqd" Dec 15 09:01:35.233507 kubelet[2779]: I1215 09:01:35.233484 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqqc\" (UniqueName: \"kubernetes.io/projected/f20e864b-b1ff-46a3-98ee-ca22c3080b90-kube-api-access-6xqqc\") pod \"calico-typha-bf54fd9b7-dqrqd\" (UID: \"f20e864b-b1ff-46a3-98ee-ca22c3080b90\") " pod="calico-system/calico-typha-bf54fd9b7-dqrqd" Dec 15 09:01:35.233507 kubelet[2779]: I1215 09:01:35.233508 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f20e864b-b1ff-46a3-98ee-ca22c3080b90-tigera-ca-bundle\") pod \"calico-typha-bf54fd9b7-dqrqd\" (UID: \"f20e864b-b1ff-46a3-98ee-ca22c3080b90\") " pod="calico-system/calico-typha-bf54fd9b7-dqrqd" Dec 15 09:01:35.271481 systemd[1]: Created slice kubepods-besteffort-pod1c680467_02bd_473f_9d40_4266a611eafe.slice - libcontainer container kubepods-besteffort-pod1c680467_02bd_473f_9d40_4266a611eafe.slice. 
Dec 15 09:01:35.333988 kubelet[2779]: I1215 09:01:35.333852 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-var-run-calico\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.333988 kubelet[2779]: I1215 09:01:35.333899 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnsn\" (UniqueName: \"kubernetes.io/projected/1c680467-02bd-473f-9d40-4266a611eafe-kube-api-access-5vnsn\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.333988 kubelet[2779]: I1215 09:01:35.333927 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-policysync\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334174 kubelet[2779]: I1215 09:01:35.334035 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-cni-bin-dir\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334174 kubelet[2779]: I1215 09:01:35.334088 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-flexvol-driver-host\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334174 kubelet[2779]: I1215 
09:01:35.334109 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-var-lib-calico\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334174 kubelet[2779]: I1215 09:01:35.334123 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c680467-02bd-473f-9d40-4266a611eafe-tigera-ca-bundle\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334174 kubelet[2779]: I1215 09:01:35.334142 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-cni-log-dir\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334365 kubelet[2779]: I1215 09:01:35.334179 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-lib-modules\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334365 kubelet[2779]: I1215 09:01:35.334193 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-xtables-lock\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334365 kubelet[2779]: I1215 09:01:35.334210 2779 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1c680467-02bd-473f-9d40-4266a611eafe-cni-net-dir\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.334365 kubelet[2779]: I1215 09:01:35.334226 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1c680467-02bd-473f-9d40-4266a611eafe-node-certs\") pod \"calico-node-zr66q\" (UID: \"1c680467-02bd-473f-9d40-4266a611eafe\") " pod="calico-system/calico-node-zr66q" Dec 15 09:01:35.439390 kubelet[2779]: E1215 09:01:35.439314 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.439390 kubelet[2779]: W1215 09:01:35.439339 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.439637 kubelet[2779]: E1215 09:01:35.439465 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.443673 kubelet[2779]: E1215 09:01:35.443615 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.443673 kubelet[2779]: W1215 09:01:35.443648 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.443673 kubelet[2779]: E1215 09:01:35.443671 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.449838 kubelet[2779]: E1215 09:01:35.448767 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.449838 kubelet[2779]: W1215 09:01:35.448780 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.449838 kubelet[2779]: E1215 09:01:35.448791 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.455227 kubelet[2779]: E1215 09:01:35.455160 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:35.455827 containerd[1611]: time="2025-12-15T09:01:35.455748815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf54fd9b7-dqrqd,Uid:f20e864b-b1ff-46a3-98ee-ca22c3080b90,Namespace:calico-system,Attempt:0,}" Dec 15 09:01:35.470001 kubelet[2779]: E1215 09:01:35.469873 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:35.479939 containerd[1611]: time="2025-12-15T09:01:35.479882790Z" level=info msg="connecting to shim ed185abe8fdd67113dcce57afccaa327f7bf80d26e292efac24f57a28ee78bb9" address="unix:///run/containerd/s/244983e1ab10ffba647ead04a0a56d3c789f4cf77d1b592deb6adb8815623cc3" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:01:35.511767 kubelet[2779]: E1215 09:01:35.511638 2779 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.511767 kubelet[2779]: W1215 09:01:35.511660 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.511767 kubelet[2779]: E1215 09:01:35.511680 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.511993 kubelet[2779]: E1215 09:01:35.511956 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.511993 kubelet[2779]: W1215 09:01:35.511984 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.512050 kubelet[2779]: E1215 09:01:35.512013 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.512329 kubelet[2779]: E1215 09:01:35.512304 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.512370 kubelet[2779]: W1215 09:01:35.512318 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.512370 kubelet[2779]: E1215 09:01:35.512348 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.512673 kubelet[2779]: E1215 09:01:35.512656 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.512731 kubelet[2779]: W1215 09:01:35.512721 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.512788 kubelet[2779]: E1215 09:01:35.512777 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.513457 kubelet[2779]: E1215 09:01:35.513037 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.513457 kubelet[2779]: W1215 09:01:35.513048 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.513457 kubelet[2779]: E1215 09:01:35.513058 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.513056 systemd[1]: Started cri-containerd-ed185abe8fdd67113dcce57afccaa327f7bf80d26e292efac24f57a28ee78bb9.scope - libcontainer container ed185abe8fdd67113dcce57afccaa327f7bf80d26e292efac24f57a28ee78bb9. 
Dec 15 09:01:35.514121 kubelet[2779]: E1215 09:01:35.514008 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.514121 kubelet[2779]: W1215 09:01:35.514022 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.514121 kubelet[2779]: E1215 09:01:35.514031 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.514271 kubelet[2779]: E1215 09:01:35.514260 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.514321 kubelet[2779]: W1215 09:01:35.514310 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.514366 kubelet[2779]: E1215 09:01:35.514357 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.514634 kubelet[2779]: E1215 09:01:35.514622 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.514688 kubelet[2779]: W1215 09:01:35.514678 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.514733 kubelet[2779]: E1215 09:01:35.514724 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.515070 kubelet[2779]: E1215 09:01:35.514974 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.515070 kubelet[2779]: W1215 09:01:35.514984 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.515070 kubelet[2779]: E1215 09:01:35.514993 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.515217 kubelet[2779]: E1215 09:01:35.515206 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.515263 kubelet[2779]: W1215 09:01:35.515253 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.515308 kubelet[2779]: E1215 09:01:35.515299 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.515531 kubelet[2779]: E1215 09:01:35.515519 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.515668 kubelet[2779]: W1215 09:01:35.515577 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.515668 kubelet[2779]: E1215 09:01:35.515589 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.515847 kubelet[2779]: E1215 09:01:35.515784 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.515847 kubelet[2779]: W1215 09:01:35.515828 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.515902 kubelet[2779]: E1215 09:01:35.515852 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.516175 kubelet[2779]: E1215 09:01:35.516157 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.516175 kubelet[2779]: W1215 09:01:35.516170 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.516231 kubelet[2779]: E1215 09:01:35.516180 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.516499 kubelet[2779]: E1215 09:01:35.516383 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.516499 kubelet[2779]: W1215 09:01:35.516396 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.516499 kubelet[2779]: E1215 09:01:35.516406 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.516633 kubelet[2779]: E1215 09:01:35.516615 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.516633 kubelet[2779]: W1215 09:01:35.516629 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.516682 kubelet[2779]: E1215 09:01:35.516642 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.516864 kubelet[2779]: E1215 09:01:35.516849 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.516864 kubelet[2779]: W1215 09:01:35.516859 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.516924 kubelet[2779]: E1215 09:01:35.516868 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.517061 kubelet[2779]: E1215 09:01:35.517046 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.517061 kubelet[2779]: W1215 09:01:35.517056 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.517114 kubelet[2779]: E1215 09:01:35.517064 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.517303 kubelet[2779]: E1215 09:01:35.517243 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.517303 kubelet[2779]: W1215 09:01:35.517257 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.517303 kubelet[2779]: E1215 09:01:35.517264 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.517498 kubelet[2779]: E1215 09:01:35.517464 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.517498 kubelet[2779]: W1215 09:01:35.517487 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.517498 kubelet[2779]: E1215 09:01:35.517497 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.517742 kubelet[2779]: E1215 09:01:35.517727 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.517742 kubelet[2779]: W1215 09:01:35.517736 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.517742 kubelet[2779]: E1215 09:01:35.517744 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.526000 audit: BPF prog-id=155 op=LOAD Dec 15 09:01:35.527000 audit: BPF prog-id=156 op=LOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.527000 audit: BPF prog-id=156 op=UNLOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.527000 audit: BPF prog-id=157 op=LOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.527000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.527000 audit: BPF prog-id=158 op=LOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.527000 audit: BPF prog-id=158 op=UNLOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.527000 audit: BPF prog-id=157 op=UNLOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:01:35.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.527000 audit: BPF prog-id=159 op=LOAD Dec 15 09:01:35.527000 audit[3256]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3239 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564313835616265386664643637313133646363653537616663636161 Dec 15 09:01:35.536742 kubelet[2779]: E1215 09:01:35.536688 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.536742 kubelet[2779]: W1215 09:01:35.536730 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.536843 kubelet[2779]: E1215 09:01:35.536752 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.536843 kubelet[2779]: I1215 09:01:35.536822 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47f37f66-a36c-44b9-8447-de6c1cff5809-kubelet-dir\") pod \"csi-node-driver-qg84l\" (UID: \"47f37f66-a36c-44b9-8447-de6c1cff5809\") " pod="calico-system/csi-node-driver-qg84l" Dec 15 09:01:35.537072 kubelet[2779]: E1215 09:01:35.537045 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.537072 kubelet[2779]: W1215 09:01:35.537060 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.537072 kubelet[2779]: E1215 09:01:35.537069 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.537207 kubelet[2779]: I1215 09:01:35.537093 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47f37f66-a36c-44b9-8447-de6c1cff5809-registration-dir\") pod \"csi-node-driver-qg84l\" (UID: \"47f37f66-a36c-44b9-8447-de6c1cff5809\") " pod="calico-system/csi-node-driver-qg84l" Dec 15 09:01:35.537890 kubelet[2779]: E1215 09:01:35.537869 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.537890 kubelet[2779]: W1215 09:01:35.537887 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.537951 kubelet[2779]: E1215 09:01:35.537901 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.538150 kubelet[2779]: E1215 09:01:35.538124 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.538150 kubelet[2779]: W1215 09:01:35.538139 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.538211 kubelet[2779]: E1215 09:01:35.538151 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.538382 kubelet[2779]: E1215 09:01:35.538357 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.538382 kubelet[2779]: W1215 09:01:35.538372 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.538438 kubelet[2779]: E1215 09:01:35.538383 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.538438 kubelet[2779]: I1215 09:01:35.538408 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47f37f66-a36c-44b9-8447-de6c1cff5809-socket-dir\") pod \"csi-node-driver-qg84l\" (UID: \"47f37f66-a36c-44b9-8447-de6c1cff5809\") " pod="calico-system/csi-node-driver-qg84l" Dec 15 09:01:35.538639 kubelet[2779]: E1215 09:01:35.538621 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.538639 kubelet[2779]: W1215 09:01:35.538635 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.538692 kubelet[2779]: E1215 09:01:35.538645 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.538692 kubelet[2779]: I1215 09:01:35.538675 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/47f37f66-a36c-44b9-8447-de6c1cff5809-varrun\") pod \"csi-node-driver-qg84l\" (UID: \"47f37f66-a36c-44b9-8447-de6c1cff5809\") " pod="calico-system/csi-node-driver-qg84l" Dec 15 09:01:35.539981 kubelet[2779]: E1215 09:01:35.539594 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.539981 kubelet[2779]: W1215 09:01:35.539610 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.539981 kubelet[2779]: E1215 09:01:35.539619 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.540322 kubelet[2779]: E1215 09:01:35.540288 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.540322 kubelet[2779]: W1215 09:01:35.540305 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.540322 kubelet[2779]: E1215 09:01:35.540315 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.541148 kubelet[2779]: E1215 09:01:35.541121 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.541148 kubelet[2779]: W1215 09:01:35.541135 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.541148 kubelet[2779]: E1215 09:01:35.541145 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.541298 kubelet[2779]: I1215 09:01:35.541243 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfccg\" (UniqueName: \"kubernetes.io/projected/47f37f66-a36c-44b9-8447-de6c1cff5809-kube-api-access-dfccg\") pod \"csi-node-driver-qg84l\" (UID: \"47f37f66-a36c-44b9-8447-de6c1cff5809\") " pod="calico-system/csi-node-driver-qg84l" Dec 15 09:01:35.541567 kubelet[2779]: E1215 09:01:35.541548 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.541567 kubelet[2779]: W1215 09:01:35.541560 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.541567 kubelet[2779]: E1215 09:01:35.541569 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.542425 kubelet[2779]: E1215 09:01:35.542387 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.542425 kubelet[2779]: W1215 09:01:35.542400 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.542425 kubelet[2779]: E1215 09:01:35.542410 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.543163 kubelet[2779]: E1215 09:01:35.543130 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.543163 kubelet[2779]: W1215 09:01:35.543145 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.543163 kubelet[2779]: E1215 09:01:35.543155 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.543379 kubelet[2779]: E1215 09:01:35.543352 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.543379 kubelet[2779]: W1215 09:01:35.543364 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.543379 kubelet[2779]: E1215 09:01:35.543373 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.543608 kubelet[2779]: E1215 09:01:35.543579 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.543608 kubelet[2779]: W1215 09:01:35.543591 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.543608 kubelet[2779]: E1215 09:01:35.543598 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.543838 kubelet[2779]: E1215 09:01:35.543795 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.543838 kubelet[2779]: W1215 09:01:35.543829 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.543838 kubelet[2779]: E1215 09:01:35.543837 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.573736 containerd[1611]: time="2025-12-15T09:01:35.573601393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf54fd9b7-dqrqd,Uid:f20e864b-b1ff-46a3-98ee-ca22c3080b90,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed185abe8fdd67113dcce57afccaa327f7bf80d26e292efac24f57a28ee78bb9\"" Dec 15 09:01:35.574609 kubelet[2779]: E1215 09:01:35.574550 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:35.575156 kubelet[2779]: E1215 09:01:35.575136 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:35.575948 containerd[1611]: time="2025-12-15T09:01:35.575756107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zr66q,Uid:1c680467-02bd-473f-9d40-4266a611eafe,Namespace:calico-system,Attempt:0,}" Dec 15 09:01:35.577228 containerd[1611]: time="2025-12-15T09:01:35.577191293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 15 09:01:35.625670 containerd[1611]: time="2025-12-15T09:01:35.625526630Z" level=info 
msg="connecting to shim 7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b" address="unix:///run/containerd/s/933788b51f8abc6b906b77c2c686e4d71a30f983c1ebd3310232a19f682d2aff" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:01:35.643517 kubelet[2779]: E1215 09:01:35.643418 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.643517 kubelet[2779]: W1215 09:01:35.643443 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.643517 kubelet[2779]: E1215 09:01:35.643463 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.643825 kubelet[2779]: E1215 09:01:35.643771 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.643825 kubelet[2779]: W1215 09:01:35.643784 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.643907 kubelet[2779]: E1215 09:01:35.643833 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.644265 kubelet[2779]: E1215 09:01:35.644244 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.644265 kubelet[2779]: W1215 09:01:35.644259 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.644265 kubelet[2779]: E1215 09:01:35.644269 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.644571 kubelet[2779]: E1215 09:01:35.644537 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.644571 kubelet[2779]: W1215 09:01:35.644552 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.644571 kubelet[2779]: E1215 09:01:35.644561 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.645053 kubelet[2779]: E1215 09:01:35.645036 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.645053 kubelet[2779]: W1215 09:01:35.645049 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.645111 kubelet[2779]: E1215 09:01:35.645060 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.646945 kubelet[2779]: E1215 09:01:35.646920 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.646985 kubelet[2779]: W1215 09:01:35.646957 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.646985 kubelet[2779]: E1215 09:01:35.646968 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.647264 kubelet[2779]: E1215 09:01:35.647246 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.647264 kubelet[2779]: W1215 09:01:35.647258 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.647363 kubelet[2779]: E1215 09:01:35.647268 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.647559 kubelet[2779]: E1215 09:01:35.647526 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.647559 kubelet[2779]: W1215 09:01:35.647538 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.647559 kubelet[2779]: E1215 09:01:35.647546 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.647856 kubelet[2779]: E1215 09:01:35.647831 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.647856 kubelet[2779]: W1215 09:01:35.647843 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.647856 kubelet[2779]: E1215 09:01:35.647851 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.648102 kubelet[2779]: E1215 09:01:35.648064 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.648102 kubelet[2779]: W1215 09:01:35.648072 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.648102 kubelet[2779]: E1215 09:01:35.648080 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.648378 kubelet[2779]: E1215 09:01:35.648337 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.648378 kubelet[2779]: W1215 09:01:35.648377 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.648433 kubelet[2779]: E1215 09:01:35.648386 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.648827 kubelet[2779]: E1215 09:01:35.648589 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.648827 kubelet[2779]: W1215 09:01:35.648602 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.648827 kubelet[2779]: E1215 09:01:35.648751 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.648983 kubelet[2779]: E1215 09:01:35.648962 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.648983 kubelet[2779]: W1215 09:01:35.648975 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.649041 kubelet[2779]: E1215 09:01:35.648987 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.649304 kubelet[2779]: E1215 09:01:35.649264 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.649304 kubelet[2779]: W1215 09:01:35.649277 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.649304 kubelet[2779]: E1215 09:01:35.649285 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.649555 kubelet[2779]: E1215 09:01:35.649524 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.649555 kubelet[2779]: W1215 09:01:35.649550 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.649625 kubelet[2779]: E1215 09:01:35.649575 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.649886 kubelet[2779]: E1215 09:01:35.649836 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.649886 kubelet[2779]: W1215 09:01:35.649852 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.649886 kubelet[2779]: E1215 09:01:35.649860 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.650061 kubelet[2779]: E1215 09:01:35.650039 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.650061 kubelet[2779]: W1215 09:01:35.650052 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.650061 kubelet[2779]: E1215 09:01:35.650060 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.650360 kubelet[2779]: E1215 09:01:35.650263 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.650360 kubelet[2779]: W1215 09:01:35.650278 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.650360 kubelet[2779]: E1215 09:01:35.650288 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.650522 kubelet[2779]: E1215 09:01:35.650482 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.650522 kubelet[2779]: W1215 09:01:35.650495 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.650522 kubelet[2779]: E1215 09:01:35.650504 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.650759 kubelet[2779]: E1215 09:01:35.650675 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.650759 kubelet[2779]: W1215 09:01:35.650686 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.650759 kubelet[2779]: E1215 09:01:35.650694 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.650905 kubelet[2779]: E1215 09:01:35.650890 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.650905 kubelet[2779]: W1215 09:01:35.650902 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.651109 kubelet[2779]: E1215 09:01:35.650911 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.651109 kubelet[2779]: E1215 09:01:35.651078 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.651109 kubelet[2779]: W1215 09:01:35.651086 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.651109 kubelet[2779]: E1215 09:01:35.651093 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.651481 kubelet[2779]: E1215 09:01:35.651447 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.651481 kubelet[2779]: W1215 09:01:35.651463 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.651538 kubelet[2779]: E1215 09:01:35.651482 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.653204 kubelet[2779]: E1215 09:01:35.652412 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.653204 kubelet[2779]: W1215 09:01:35.652428 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.653204 kubelet[2779]: E1215 09:01:35.652437 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.653204 kubelet[2779]: E1215 09:01:35.652642 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.653204 kubelet[2779]: W1215 09:01:35.652650 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.653204 kubelet[2779]: E1215 09:01:35.652657 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:35.659106 systemd[1]: Started cri-containerd-7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b.scope - libcontainer container 7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b. Dec 15 09:01:35.661697 kubelet[2779]: E1215 09:01:35.661671 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:35.661697 kubelet[2779]: W1215 09:01:35.661692 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:35.662084 kubelet[2779]: E1215 09:01:35.661707 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:35.670000 audit: BPF prog-id=160 op=LOAD Dec 15 09:01:35.670000 audit: BPF prog-id=161 op=LOAD Dec 15 09:01:35.670000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.671000 audit: BPF prog-id=161 op=UNLOAD Dec 15 09:01:35.671000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.671000 audit: BPF prog-id=162 op=LOAD Dec 15 09:01:35.671000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.671000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.671000 audit: BPF prog-id=163 op=LOAD Dec 15 09:01:35.671000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.671000 audit: BPF prog-id=163 op=UNLOAD Dec 15 09:01:35.671000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.671000 audit: BPF prog-id=162 op=UNLOAD Dec 15 09:01:35.671000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:01:35.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.671000 audit: BPF prog-id=164 op=LOAD Dec 15 09:01:35.671000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:35.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762656230386330646635623062363133326139303333383666663832 Dec 15 09:01:35.690111 containerd[1611]: time="2025-12-15T09:01:35.690077216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zr66q,Uid:1c680467-02bd-473f-9d40-4266a611eafe,Namespace:calico-system,Attempt:0,} returns sandbox id \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\"" Dec 15 09:01:35.690985 kubelet[2779]: E1215 09:01:35.690945 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:36.073000 audit[3388]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:36.075455 kernel: kauditd_printk_skb: 69 callbacks suppressed Dec 15 09:01:36.075523 kernel: audit: type=1325 audit(1765789296.073:549): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 
09:01:36.073000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe285472c0 a2=0 a3=7ffe285472ac items=0 ppid=2893 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:36.083988 kernel: audit: type=1300 audit(1765789296.073:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe285472c0 a2=0 a3=7ffe285472ac items=0 ppid=2893 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:36.084030 kernel: audit: type=1327 audit(1765789296.073:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:36.073000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:36.087000 audit[3388]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:36.087000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe285472c0 a2=0 a3=0 items=0 ppid=2893 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:36.097327 kernel: audit: type=1325 audit(1765789296.087:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:01:36.097382 kernel: audit: type=1300 audit(1765789296.087:550): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe285472c0 a2=0 a3=0 items=0 ppid=2893 pid=3388 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:36.097411 kernel: audit: type=1327 audit(1765789296.087:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:36.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:01:36.719404 kubelet[2779]: E1215 09:01:36.719353 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:37.071042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339660740.mount: Deactivated successfully. 
Dec 15 09:01:38.338274 containerd[1611]: time="2025-12-15T09:01:38.338208017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:38.339161 containerd[1611]: time="2025-12-15T09:01:38.339119036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 15 09:01:38.341240 containerd[1611]: time="2025-12-15T09:01:38.341206745Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:38.353622 containerd[1611]: time="2025-12-15T09:01:38.353573882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:38.354135 containerd[1611]: time="2025-12-15T09:01:38.354090382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.77686378s" Dec 15 09:01:38.354135 containerd[1611]: time="2025-12-15T09:01:38.354120349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 15 09:01:38.354945 containerd[1611]: time="2025-12-15T09:01:38.354918242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 15 09:01:38.370604 containerd[1611]: time="2025-12-15T09:01:38.370551083Z" level=info msg="CreateContainer within sandbox \"ed185abe8fdd67113dcce57afccaa327f7bf80d26e292efac24f57a28ee78bb9\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 15 09:01:38.378184 containerd[1611]: time="2025-12-15T09:01:38.378136080Z" level=info msg="Container 74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:38.386102 containerd[1611]: time="2025-12-15T09:01:38.386052795Z" level=info msg="CreateContainer within sandbox \"ed185abe8fdd67113dcce57afccaa327f7bf80d26e292efac24f57a28ee78bb9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586\"" Dec 15 09:01:38.386559 containerd[1611]: time="2025-12-15T09:01:38.386530642Z" level=info msg="StartContainer for \"74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586\"" Dec 15 09:01:38.387969 containerd[1611]: time="2025-12-15T09:01:38.387940616Z" level=info msg="connecting to shim 74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586" address="unix:///run/containerd/s/244983e1ab10ffba647ead04a0a56d3c789f4cf77d1b592deb6adb8815623cc3" protocol=ttrpc version=3 Dec 15 09:01:38.407055 systemd[1]: Started cri-containerd-74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586.scope - libcontainer container 74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586. 
Dec 15 09:01:38.421000 audit: BPF prog-id=165 op=LOAD Dec 15 09:01:38.424831 kernel: audit: type=1334 audit(1765789298.421:551): prog-id=165 op=LOAD Dec 15 09:01:38.424911 kernel: audit: type=1334 audit(1765789298.423:552): prog-id=166 op=LOAD Dec 15 09:01:38.423000 audit: BPF prog-id=166 op=LOAD Dec 15 09:01:38.423000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.431435 kernel: audit: type=1300 audit(1765789298.423:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.431518 kernel: audit: type=1327 audit(1765789298.423:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.423000 audit: BPF prog-id=166 op=UNLOAD Dec 15 09:01:38.423000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:01:38.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.423000 audit: BPF prog-id=167 op=LOAD Dec 15 09:01:38.423000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.423000 audit: BPF prog-id=168 op=LOAD Dec 15 09:01:38.423000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.424000 audit: BPF prog-id=168 op=UNLOAD Dec 15 09:01:38.424000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.424000 audit: BPF prog-id=167 op=UNLOAD Dec 15 09:01:38.424000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.424000 audit: BPF prog-id=169 op=LOAD Dec 15 09:01:38.424000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3239 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:38.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663934623039633835656662356530376436343138353732623862 Dec 15 09:01:38.604430 containerd[1611]: time="2025-12-15T09:01:38.604225238Z" level=info msg="StartContainer for \"74f94b09c85efb5e07d6418572b8bc31be89d22e83a58db1c822b5d5577f0586\" returns successfully" Dec 15 09:01:38.722386 kubelet[2779]: E1215 09:01:38.722312 2779 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:38.773757 kubelet[2779]: E1215 09:01:38.773720 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:38.808488 kubelet[2779]: I1215 09:01:38.808424 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf54fd9b7-dqrqd" podStartSLOduration=1.030505494 podStartE2EDuration="3.808408437s" podCreationTimestamp="2025-12-15 09:01:35 +0000 UTC" firstStartedPulling="2025-12-15 09:01:35.576890502 +0000 UTC m=+18.960662416" lastFinishedPulling="2025-12-15 09:01:38.354793445 +0000 UTC m=+21.738565359" observedRunningTime="2025-12-15 09:01:38.808097737 +0000 UTC m=+22.191869651" watchObservedRunningTime="2025-12-15 09:01:38.808408437 +0000 UTC m=+22.192180351" Dec 15 09:01:38.841141 kubelet[2779]: E1215 09:01:38.841113 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.841141 kubelet[2779]: W1215 09:01:38.841130 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.841141 kubelet[2779]: E1215 09:01:38.841148 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.841412 kubelet[2779]: E1215 09:01:38.841400 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.841412 kubelet[2779]: W1215 09:01:38.841409 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.841521 kubelet[2779]: E1215 09:01:38.841418 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.841658 kubelet[2779]: E1215 09:01:38.841643 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.841658 kubelet[2779]: W1215 09:01:38.841654 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.841743 kubelet[2779]: E1215 09:01:38.841662 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.841937 kubelet[2779]: E1215 09:01:38.841924 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.841937 kubelet[2779]: W1215 09:01:38.841936 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.842032 kubelet[2779]: E1215 09:01:38.841945 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.842136 kubelet[2779]: E1215 09:01:38.842124 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.842136 kubelet[2779]: W1215 09:01:38.842134 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.842220 kubelet[2779]: E1215 09:01:38.842151 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.842354 kubelet[2779]: E1215 09:01:38.842326 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.842354 kubelet[2779]: W1215 09:01:38.842335 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.842354 kubelet[2779]: E1215 09:01:38.842342 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.842586 kubelet[2779]: E1215 09:01:38.842569 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.842586 kubelet[2779]: W1215 09:01:38.842579 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.842586 kubelet[2779]: E1215 09:01:38.842587 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.842762 kubelet[2779]: E1215 09:01:38.842739 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.842762 kubelet[2779]: W1215 09:01:38.842757 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.842824 kubelet[2779]: E1215 09:01:38.842765 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.842944 kubelet[2779]: E1215 09:01:38.842930 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.842944 kubelet[2779]: W1215 09:01:38.842938 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.842992 kubelet[2779]: E1215 09:01:38.842954 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.843105 kubelet[2779]: E1215 09:01:38.843092 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.843105 kubelet[2779]: W1215 09:01:38.843099 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.843149 kubelet[2779]: E1215 09:01:38.843106 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.843250 kubelet[2779]: E1215 09:01:38.843237 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.843250 kubelet[2779]: W1215 09:01:38.843245 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.843290 kubelet[2779]: E1215 09:01:38.843251 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.843404 kubelet[2779]: E1215 09:01:38.843391 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.843404 kubelet[2779]: W1215 09:01:38.843399 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.843456 kubelet[2779]: E1215 09:01:38.843406 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.843573 kubelet[2779]: E1215 09:01:38.843557 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.843573 kubelet[2779]: W1215 09:01:38.843566 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.843634 kubelet[2779]: E1215 09:01:38.843576 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.843743 kubelet[2779]: E1215 09:01:38.843728 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.843743 kubelet[2779]: W1215 09:01:38.843736 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.843743 kubelet[2779]: E1215 09:01:38.843743 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.843941 kubelet[2779]: E1215 09:01:38.843925 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.843941 kubelet[2779]: W1215 09:01:38.843938 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.844022 kubelet[2779]: E1215 09:01:38.843948 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.872523 kubelet[2779]: E1215 09:01:38.872420 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.872523 kubelet[2779]: W1215 09:01:38.872444 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.872523 kubelet[2779]: E1215 09:01:38.872475 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.872749 kubelet[2779]: E1215 09:01:38.872732 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.872749 kubelet[2779]: W1215 09:01:38.872744 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.872821 kubelet[2779]: E1215 09:01:38.872753 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.873094 kubelet[2779]: E1215 09:01:38.873073 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.873094 kubelet[2779]: W1215 09:01:38.873087 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.873157 kubelet[2779]: E1215 09:01:38.873098 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.873376 kubelet[2779]: E1215 09:01:38.873360 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.873376 kubelet[2779]: W1215 09:01:38.873372 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.873441 kubelet[2779]: E1215 09:01:38.873416 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.873733 kubelet[2779]: E1215 09:01:38.873719 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.873733 kubelet[2779]: W1215 09:01:38.873731 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.873798 kubelet[2779]: E1215 09:01:38.873743 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.874049 kubelet[2779]: E1215 09:01:38.874028 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.874089 kubelet[2779]: W1215 09:01:38.874049 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.874089 kubelet[2779]: E1215 09:01:38.874059 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.874360 kubelet[2779]: E1215 09:01:38.874346 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.874360 kubelet[2779]: W1215 09:01:38.874358 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.874408 kubelet[2779]: E1215 09:01:38.874368 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.874599 kubelet[2779]: E1215 09:01:38.874585 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.874599 kubelet[2779]: W1215 09:01:38.874596 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.874639 kubelet[2779]: E1215 09:01:38.874606 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.874895 kubelet[2779]: E1215 09:01:38.874867 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.874895 kubelet[2779]: W1215 09:01:38.874880 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.874895 kubelet[2779]: E1215 09:01:38.874890 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.875488 kubelet[2779]: E1215 09:01:38.875435 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.875488 kubelet[2779]: W1215 09:01:38.875476 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.875559 kubelet[2779]: E1215 09:01:38.875505 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.875781 kubelet[2779]: E1215 09:01:38.875765 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.875781 kubelet[2779]: W1215 09:01:38.875778 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.875850 kubelet[2779]: E1215 09:01:38.875790 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.876045 kubelet[2779]: E1215 09:01:38.876029 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.876045 kubelet[2779]: W1215 09:01:38.876042 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.876091 kubelet[2779]: E1215 09:01:38.876052 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.876273 kubelet[2779]: E1215 09:01:38.876257 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.876273 kubelet[2779]: W1215 09:01:38.876270 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.876328 kubelet[2779]: E1215 09:01:38.876282 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.876561 kubelet[2779]: E1215 09:01:38.876544 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.876561 kubelet[2779]: W1215 09:01:38.876559 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.876608 kubelet[2779]: E1215 09:01:38.876584 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.876883 kubelet[2779]: E1215 09:01:38.876867 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.876883 kubelet[2779]: W1215 09:01:38.876880 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.876947 kubelet[2779]: E1215 09:01:38.876892 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.877136 kubelet[2779]: E1215 09:01:38.877123 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.877165 kubelet[2779]: W1215 09:01:38.877144 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.877165 kubelet[2779]: E1215 09:01:38.877156 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:38.877493 kubelet[2779]: E1215 09:01:38.877453 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.877493 kubelet[2779]: W1215 09:01:38.877485 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.877538 kubelet[2779]: E1215 09:01:38.877498 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:38.877727 kubelet[2779]: E1215 09:01:38.877704 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:38.877727 kubelet[2779]: W1215 09:01:38.877719 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:38.877774 kubelet[2779]: E1215 09:01:38.877730 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.775289 kubelet[2779]: I1215 09:01:39.775245 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 09:01:39.775677 kubelet[2779]: E1215 09:01:39.775616 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:39.850894 kubelet[2779]: E1215 09:01:39.850854 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.850894 kubelet[2779]: W1215 09:01:39.850877 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.850894 kubelet[2779]: E1215 09:01:39.850897 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.851084 kubelet[2779]: E1215 09:01:39.851059 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.851084 kubelet[2779]: W1215 09:01:39.851067 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.851084 kubelet[2779]: E1215 09:01:39.851074 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.851233 kubelet[2779]: E1215 09:01:39.851222 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.851233 kubelet[2779]: W1215 09:01:39.851228 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.851285 kubelet[2779]: E1215 09:01:39.851235 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.851703 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.852847 kubelet[2779]: W1215 09:01:39.851715 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.851723 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.851977 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.852847 kubelet[2779]: W1215 09:01:39.851987 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.851997 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.852156 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.852847 kubelet[2779]: W1215 09:01:39.852162 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.852170 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.852847 kubelet[2779]: E1215 09:01:39.852307 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853122 kubelet[2779]: W1215 09:01:39.852314 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853122 kubelet[2779]: E1215 09:01:39.852321 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.853122 kubelet[2779]: E1215 09:01:39.852471 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853122 kubelet[2779]: W1215 09:01:39.852480 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853122 kubelet[2779]: E1215 09:01:39.852491 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.853122 kubelet[2779]: E1215 09:01:39.852682 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853122 kubelet[2779]: W1215 09:01:39.852689 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853122 kubelet[2779]: E1215 09:01:39.852698 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.853122 kubelet[2779]: E1215 09:01:39.852945 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853122 kubelet[2779]: W1215 09:01:39.852954 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853331 kubelet[2779]: E1215 09:01:39.852964 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.853331 kubelet[2779]: E1215 09:01:39.853142 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853331 kubelet[2779]: W1215 09:01:39.853151 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853331 kubelet[2779]: E1215 09:01:39.853162 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.853412 kubelet[2779]: E1215 09:01:39.853375 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853412 kubelet[2779]: W1215 09:01:39.853383 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853412 kubelet[2779]: E1215 09:01:39.853392 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.853622 kubelet[2779]: E1215 09:01:39.853587 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853663 kubelet[2779]: W1215 09:01:39.853622 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853663 kubelet[2779]: E1215 09:01:39.853636 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.853850 kubelet[2779]: E1215 09:01:39.853836 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.853888 kubelet[2779]: W1215 09:01:39.853848 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.853888 kubelet[2779]: E1215 09:01:39.853860 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.854054 kubelet[2779]: E1215 09:01:39.854039 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.854054 kubelet[2779]: W1215 09:01:39.854051 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.854106 kubelet[2779]: E1215 09:01:39.854059 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.881655 kubelet[2779]: E1215 09:01:39.881637 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.881655 kubelet[2779]: W1215 09:01:39.881650 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.881747 kubelet[2779]: E1215 09:01:39.881661 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.881871 kubelet[2779]: E1215 09:01:39.881859 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.881871 kubelet[2779]: W1215 09:01:39.881868 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.881936 kubelet[2779]: E1215 09:01:39.881876 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.882107 kubelet[2779]: E1215 09:01:39.882093 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.882107 kubelet[2779]: W1215 09:01:39.882105 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.882178 kubelet[2779]: E1215 09:01:39.882114 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.882320 kubelet[2779]: E1215 09:01:39.882300 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.882320 kubelet[2779]: W1215 09:01:39.882308 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.882320 kubelet[2779]: E1215 09:01:39.882316 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.882506 kubelet[2779]: E1215 09:01:39.882491 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.882506 kubelet[2779]: W1215 09:01:39.882503 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.882506 kubelet[2779]: E1215 09:01:39.882513 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.882733 kubelet[2779]: E1215 09:01:39.882721 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.882733 kubelet[2779]: W1215 09:01:39.882731 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.882788 kubelet[2779]: E1215 09:01:39.882742 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.883009 kubelet[2779]: E1215 09:01:39.882979 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.883009 kubelet[2779]: W1215 09:01:39.882995 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.883009 kubelet[2779]: E1215 09:01:39.883005 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.883575 kubelet[2779]: E1215 09:01:39.883562 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.883575 kubelet[2779]: W1215 09:01:39.883572 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.883575 kubelet[2779]: E1215 09:01:39.883581 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.883910 kubelet[2779]: E1215 09:01:39.883878 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.883953 kubelet[2779]: W1215 09:01:39.883911 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.883992 kubelet[2779]: E1215 09:01:39.883946 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.884332 kubelet[2779]: E1215 09:01:39.884298 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.884332 kubelet[2779]: W1215 09:01:39.884313 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.884332 kubelet[2779]: E1215 09:01:39.884324 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.884605 kubelet[2779]: E1215 09:01:39.884591 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.884605 kubelet[2779]: W1215 09:01:39.884602 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.884679 kubelet[2779]: E1215 09:01:39.884612 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.884831 kubelet[2779]: E1215 09:01:39.884817 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.884831 kubelet[2779]: W1215 09:01:39.884827 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.884831 kubelet[2779]: E1215 09:01:39.884838 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.885061 kubelet[2779]: E1215 09:01:39.885049 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.885061 kubelet[2779]: W1215 09:01:39.885059 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.885121 kubelet[2779]: E1215 09:01:39.885067 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.885516 kubelet[2779]: E1215 09:01:39.885497 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.885516 kubelet[2779]: W1215 09:01:39.885512 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.885584 kubelet[2779]: E1215 09:01:39.885524 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.885750 kubelet[2779]: E1215 09:01:39.885727 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.885750 kubelet[2779]: W1215 09:01:39.885741 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.885816 kubelet[2779]: E1215 09:01:39.885751 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.886003 kubelet[2779]: E1215 09:01:39.885981 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.886003 kubelet[2779]: W1215 09:01:39.886001 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.886052 kubelet[2779]: E1215 09:01:39.886010 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:39.886283 kubelet[2779]: E1215 09:01:39.886261 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.886283 kubelet[2779]: W1215 09:01:39.886270 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.886283 kubelet[2779]: E1215 09:01:39.886279 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 15 09:01:39.886481 kubelet[2779]: E1215 09:01:39.886470 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 15 09:01:39.886481 kubelet[2779]: W1215 09:01:39.886479 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 15 09:01:39.886523 kubelet[2779]: E1215 09:01:39.886487 2779 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 15 09:01:40.519425 containerd[1611]: time="2025-12-15T09:01:40.519383217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:40.520525 containerd[1611]: time="2025-12-15T09:01:40.520484193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 15 09:01:40.522094 containerd[1611]: time="2025-12-15T09:01:40.522064988Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:40.524193 containerd[1611]: time="2025-12-15T09:01:40.524160379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:40.524714 containerd[1611]: time="2025-12-15T09:01:40.524662871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.169718149s" Dec 15 09:01:40.524745 containerd[1611]: time="2025-12-15T09:01:40.524711303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 15 09:01:40.527851 containerd[1611]: time="2025-12-15T09:01:40.527781781Z" level=info msg="CreateContainer within sandbox \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 15 09:01:40.537438 containerd[1611]: time="2025-12-15T09:01:40.537395758Z" level=info msg="Container 296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:40.546993 containerd[1611]: time="2025-12-15T09:01:40.546944603Z" level=info msg="CreateContainer within sandbox \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511\"" Dec 15 09:01:40.547705 containerd[1611]: time="2025-12-15T09:01:40.547660529Z" level=info msg="StartContainer for \"296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511\"" Dec 15 09:01:40.549479 containerd[1611]: time="2025-12-15T09:01:40.549390157Z" level=info msg="connecting to shim 296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511" address="unix:///run/containerd/s/933788b51f8abc6b906b77c2c686e4d71a30f983c1ebd3310232a19f682d2aff" protocol=ttrpc version=3 Dec 15 09:01:40.571957 systemd[1]: Started cri-containerd-296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511.scope - libcontainer container 
296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511. Dec 15 09:01:40.642000 audit: BPF prog-id=170 op=LOAD Dec 15 09:01:40.642000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3324 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:40.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239366366636431383066363663623231373835616136663064386366 Dec 15 09:01:40.643000 audit: BPF prog-id=171 op=LOAD Dec 15 09:01:40.643000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3324 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:40.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239366366636431383066363663623231373835616136663064386366 Dec 15 09:01:40.643000 audit: BPF prog-id=171 op=UNLOAD Dec 15 09:01:40.643000 audit[3510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:40.643000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239366366636431383066363663623231373835616136663064386366 Dec 15 09:01:40.643000 audit: BPF prog-id=170 op=UNLOAD Dec 15 09:01:40.643000 audit[3510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:40.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239366366636431383066363663623231373835616136663064386366 Dec 15 09:01:40.643000 audit: BPF prog-id=172 op=LOAD Dec 15 09:01:40.643000 audit[3510]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3324 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:40.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239366366636431383066363663623231373835616136663064386366 Dec 15 09:01:40.663641 containerd[1611]: time="2025-12-15T09:01:40.663602464Z" level=info msg="StartContainer for \"296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511\" returns successfully" Dec 15 09:01:40.676415 systemd[1]: cri-containerd-296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511.scope: Deactivated successfully. 
Dec 15 09:01:40.679494 containerd[1611]: time="2025-12-15T09:01:40.679461135Z" level=info msg="received container exit event container_id:\"296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511\" id:\"296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511\" pid:3523 exited_at:{seconds:1765789300 nanos:679098067}" Dec 15 09:01:40.680000 audit: BPF prog-id=172 op=UNLOAD Dec 15 09:01:40.703389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-296cfcd180f66cb21785aa6f0d8cf0557aa24bd0449ce8f396c92c54e4296511-rootfs.mount: Deactivated successfully. Dec 15 09:01:40.719540 kubelet[2779]: E1215 09:01:40.719494 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:40.779346 kubelet[2779]: E1215 09:01:40.779180 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:41.782425 kubelet[2779]: E1215 09:01:41.782366 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:41.783525 containerd[1611]: time="2025-12-15T09:01:41.783432995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 15 09:01:42.719072 kubelet[2779]: E1215 09:01:42.719010 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:44.719743 
kubelet[2779]: E1215 09:01:44.719666 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:45.934031 containerd[1611]: time="2025-12-15T09:01:45.933968787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:45.934867 containerd[1611]: time="2025-12-15T09:01:45.934823924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 15 09:01:45.936018 containerd[1611]: time="2025-12-15T09:01:45.935982535Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:45.937961 containerd[1611]: time="2025-12-15T09:01:45.937913507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:45.938420 containerd[1611]: time="2025-12-15T09:01:45.938368097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.154891761s" Dec 15 09:01:45.938463 containerd[1611]: time="2025-12-15T09:01:45.938419755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 15 
09:01:45.942589 containerd[1611]: time="2025-12-15T09:01:45.942545316Z" level=info msg="CreateContainer within sandbox \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 15 09:01:45.950719 containerd[1611]: time="2025-12-15T09:01:45.950673649Z" level=info msg="Container 45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:45.958734 containerd[1611]: time="2025-12-15T09:01:45.958650485Z" level=info msg="CreateContainer within sandbox \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39\"" Dec 15 09:01:45.959229 containerd[1611]: time="2025-12-15T09:01:45.959204001Z" level=info msg="StartContainer for \"45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39\"" Dec 15 09:01:45.960720 containerd[1611]: time="2025-12-15T09:01:45.960692116Z" level=info msg="connecting to shim 45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39" address="unix:///run/containerd/s/933788b51f8abc6b906b77c2c686e4d71a30f983c1ebd3310232a19f682d2aff" protocol=ttrpc version=3 Dec 15 09:01:45.986954 systemd[1]: Started cri-containerd-45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39.scope - libcontainer container 45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39. 
Dec 15 09:01:46.052000 audit: BPF prog-id=173 op=LOAD Dec 15 09:01:46.054752 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 15 09:01:46.054927 kernel: audit: type=1334 audit(1765789306.052:565): prog-id=173 op=LOAD Dec 15 09:01:46.052000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.062217 kernel: audit: type=1300 audit(1765789306.052:565): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.062274 kernel: audit: type=1327 audit(1765789306.052:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.052000 audit: BPF prog-id=174 op=LOAD Dec 15 09:01:46.070241 kernel: audit: type=1334 audit(1765789306.052:566): prog-id=174 op=LOAD Dec 15 09:01:46.070288 kernel: audit: type=1300 audit(1765789306.052:566): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.052000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.083328 kernel: audit: type=1327 audit(1765789306.052:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.085140 kernel: audit: type=1334 audit(1765789306.052:567): prog-id=174 op=UNLOAD Dec 15 09:01:46.052000 audit: BPF prog-id=174 op=UNLOAD Dec 15 09:01:46.052000 audit[3570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.091124 containerd[1611]: time="2025-12-15T09:01:46.091079335Z" level=info msg="StartContainer for \"45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39\" returns successfully" Dec 15 09:01:46.091893 kernel: audit: type=1300 audit(1765789306.052:567): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.098678 kernel: audit: type=1327 audit(1765789306.052:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.052000 audit: BPF prog-id=173 op=UNLOAD Dec 15 09:01:46.100515 kernel: audit: type=1334 audit(1765789306.052:568): prog-id=173 op=UNLOAD Dec 15 09:01:46.052000 audit[3570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.052000 audit: BPF prog-id=175 op=LOAD Dec 15 09:01:46.052000 audit[3570]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3324 pid=3570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:46.052000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623739313864316662383037653464393730343534366637636336 Dec 15 09:01:46.720479 kubelet[2779]: E1215 09:01:46.720439 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:46.792284 kubelet[2779]: E1215 09:01:46.792249 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:47.478077 systemd[1]: cri-containerd-45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39.scope: Deactivated successfully. Dec 15 09:01:47.478486 systemd[1]: cri-containerd-45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39.scope: Consumed 636ms CPU time, 180.4M memory peak, 2.4M read from disk, 171.3M written to disk. Dec 15 09:01:47.480587 containerd[1611]: time="2025-12-15T09:01:47.480550458Z" level=info msg="received container exit event container_id:\"45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39\" id:\"45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39\" pid:3583 exited_at:{seconds:1765789307 nanos:480351352}" Dec 15 09:01:47.483000 audit: BPF prog-id=175 op=UNLOAD Dec 15 09:01:47.504762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45b7918d1fb807e4d9704546f7cc6ae544605b8f63f5e1140dba6d938cde2b39-rootfs.mount: Deactivated successfully. 
Dec 15 09:01:47.526911 kubelet[2779]: I1215 09:01:47.526878 2779 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 15 09:01:47.562374 systemd[1]: Created slice kubepods-burstable-pod5d2dcae9_0cbf_434e_ac9e_e764a111e542.slice - libcontainer container kubepods-burstable-pod5d2dcae9_0cbf_434e_ac9e_e764a111e542.slice. Dec 15 09:01:47.571948 systemd[1]: Created slice kubepods-besteffort-pod4740f6ee_54b4_4ea9_b846_2cfc949dbd68.slice - libcontainer container kubepods-besteffort-pod4740f6ee_54b4_4ea9_b846_2cfc949dbd68.slice. Dec 15 09:01:47.578269 systemd[1]: Created slice kubepods-burstable-pod4a3713b5_5b2b_432f_a2d2_a9138faef31f.slice - libcontainer container kubepods-burstable-pod4a3713b5_5b2b_432f_a2d2_a9138faef31f.slice. Dec 15 09:01:47.583795 systemd[1]: Created slice kubepods-besteffort-pod6d673cd1_b1e0_408c_b153_e7bc92b80142.slice - libcontainer container kubepods-besteffort-pod6d673cd1_b1e0_408c_b153_e7bc92b80142.slice. Dec 15 09:01:47.589994 systemd[1]: Created slice kubepods-besteffort-pod1c5ab256_136a_4105_b959_3f278aa6f144.slice - libcontainer container kubepods-besteffort-pod1c5ab256_136a_4105_b959_3f278aa6f144.slice. Dec 15 09:01:47.596434 systemd[1]: Created slice kubepods-besteffort-pod8526aa49_e2fa_4e0c_9a1a_c5b8a64482bd.slice - libcontainer container kubepods-besteffort-pod8526aa49_e2fa_4e0c_9a1a_c5b8a64482bd.slice. Dec 15 09:01:47.601565 systemd[1]: Created slice kubepods-besteffort-pod5b0886e8_4f46_439e_a9a8_bea45b864b25.slice - libcontainer container kubepods-besteffort-pod5b0886e8_4f46_439e_a9a8_bea45b864b25.slice. 
Dec 15 09:01:47.639584 kubelet[2779]: I1215 09:01:47.639526 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd-calico-apiserver-certs\") pod \"calico-apiserver-5756d6c8cc-j8csg\" (UID: \"8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd\") " pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" Dec 15 09:01:47.639584 kubelet[2779]: I1215 09:01:47.639569 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c5ab256-136a-4105-b959-3f278aa6f144-goldmane-ca-bundle\") pod \"goldmane-666569f655-k9jfj\" (UID: \"1c5ab256-136a-4105-b959-3f278aa6f144\") " pod="calico-system/goldmane-666569f655-k9jfj" Dec 15 09:01:47.639772 kubelet[2779]: I1215 09:01:47.639633 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6mdh\" (UniqueName: \"kubernetes.io/projected/5b0886e8-4f46-439e-a9a8-bea45b864b25-kube-api-access-j6mdh\") pod \"calico-kube-controllers-86f9f87d6-hpvt5\" (UID: \"5b0886e8-4f46-439e-a9a8-bea45b864b25\") " pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" Dec 15 09:01:47.639772 kubelet[2779]: I1215 09:01:47.639695 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-backend-key-pair\") pod \"whisker-74ddd9dd8b-ddzhm\" (UID: \"6d673cd1-b1e0-408c-b153-e7bc92b80142\") " pod="calico-system/whisker-74ddd9dd8b-ddzhm" Dec 15 09:01:47.639772 kubelet[2779]: I1215 09:01:47.639721 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5ab256-136a-4105-b959-3f278aa6f144-config\") pod \"goldmane-666569f655-k9jfj\" 
(UID: \"1c5ab256-136a-4105-b959-3f278aa6f144\") " pod="calico-system/goldmane-666569f655-k9jfj" Dec 15 09:01:47.639772 kubelet[2779]: I1215 09:01:47.639739 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-ca-bundle\") pod \"whisker-74ddd9dd8b-ddzhm\" (UID: \"6d673cd1-b1e0-408c-b153-e7bc92b80142\") " pod="calico-system/whisker-74ddd9dd8b-ddzhm" Dec 15 09:01:47.639772 kubelet[2779]: I1215 09:01:47.639755 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1c5ab256-136a-4105-b959-3f278aa6f144-goldmane-key-pair\") pod \"goldmane-666569f655-k9jfj\" (UID: \"1c5ab256-136a-4105-b959-3f278aa6f144\") " pod="calico-system/goldmane-666569f655-k9jfj" Dec 15 09:01:47.639925 kubelet[2779]: I1215 09:01:47.639772 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a3713b5-5b2b-432f-a2d2-a9138faef31f-config-volume\") pod \"coredns-674b8bbfcf-ggrbh\" (UID: \"4a3713b5-5b2b-432f-a2d2-a9138faef31f\") " pod="kube-system/coredns-674b8bbfcf-ggrbh" Dec 15 09:01:47.639925 kubelet[2779]: I1215 09:01:47.639789 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl4g2\" (UniqueName: \"kubernetes.io/projected/1c5ab256-136a-4105-b959-3f278aa6f144-kube-api-access-vl4g2\") pod \"goldmane-666569f655-k9jfj\" (UID: \"1c5ab256-136a-4105-b959-3f278aa6f144\") " pod="calico-system/goldmane-666569f655-k9jfj" Dec 15 09:01:47.639925 kubelet[2779]: I1215 09:01:47.639830 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0886e8-4f46-439e-a9a8-bea45b864b25-tigera-ca-bundle\") 
pod \"calico-kube-controllers-86f9f87d6-hpvt5\" (UID: \"5b0886e8-4f46-439e-a9a8-bea45b864b25\") " pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" Dec 15 09:01:47.639925 kubelet[2779]: I1215 09:01:47.639891 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krf6z\" (UniqueName: \"kubernetes.io/projected/4740f6ee-54b4-4ea9-b846-2cfc949dbd68-kube-api-access-krf6z\") pod \"calico-apiserver-5756d6c8cc-7vpbv\" (UID: \"4740f6ee-54b4-4ea9-b846-2cfc949dbd68\") " pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" Dec 15 09:01:47.639925 kubelet[2779]: I1215 09:01:47.639919 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzf52\" (UniqueName: \"kubernetes.io/projected/4a3713b5-5b2b-432f-a2d2-a9138faef31f-kube-api-access-xzf52\") pod \"coredns-674b8bbfcf-ggrbh\" (UID: \"4a3713b5-5b2b-432f-a2d2-a9138faef31f\") " pod="kube-system/coredns-674b8bbfcf-ggrbh" Dec 15 09:01:47.640048 kubelet[2779]: I1215 09:01:47.639952 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527f6\" (UniqueName: \"kubernetes.io/projected/6d673cd1-b1e0-408c-b153-e7bc92b80142-kube-api-access-527f6\") pod \"whisker-74ddd9dd8b-ddzhm\" (UID: \"6d673cd1-b1e0-408c-b153-e7bc92b80142\") " pod="calico-system/whisker-74ddd9dd8b-ddzhm" Dec 15 09:01:47.640048 kubelet[2779]: I1215 09:01:47.639974 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4740f6ee-54b4-4ea9-b846-2cfc949dbd68-calico-apiserver-certs\") pod \"calico-apiserver-5756d6c8cc-7vpbv\" (UID: \"4740f6ee-54b4-4ea9-b846-2cfc949dbd68\") " pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" Dec 15 09:01:47.640048 kubelet[2779]: I1215 09:01:47.639991 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2dcae9-0cbf-434e-ac9e-e764a111e542-config-volume\") pod \"coredns-674b8bbfcf-dmm9f\" (UID: \"5d2dcae9-0cbf-434e-ac9e-e764a111e542\") " pod="kube-system/coredns-674b8bbfcf-dmm9f" Dec 15 09:01:47.640048 kubelet[2779]: I1215 09:01:47.640010 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggr54\" (UniqueName: \"kubernetes.io/projected/8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd-kube-api-access-ggr54\") pod \"calico-apiserver-5756d6c8cc-j8csg\" (UID: \"8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd\") " pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" Dec 15 09:01:47.640048 kubelet[2779]: I1215 09:01:47.640035 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vbt\" (UniqueName: \"kubernetes.io/projected/5d2dcae9-0cbf-434e-ac9e-e764a111e542-kube-api-access-c7vbt\") pod \"coredns-674b8bbfcf-dmm9f\" (UID: \"5d2dcae9-0cbf-434e-ac9e-e764a111e542\") " pod="kube-system/coredns-674b8bbfcf-dmm9f" Dec 15 09:01:47.797135 kubelet[2779]: E1215 09:01:47.797101 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:47.797760 containerd[1611]: time="2025-12-15T09:01:47.797730637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 15 09:01:47.867477 kubelet[2779]: E1215 09:01:47.867434 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:47.868202 containerd[1611]: time="2025-12-15T09:01:47.867982142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmm9f,Uid:5d2dcae9-0cbf-434e-ac9e-e764a111e542,Namespace:kube-system,Attempt:0,}" Dec 15 09:01:47.875678 
containerd[1611]: time="2025-12-15T09:01:47.875640456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-7vpbv,Uid:4740f6ee-54b4-4ea9-b846-2cfc949dbd68,Namespace:calico-apiserver,Attempt:0,}" Dec 15 09:01:47.882000 kubelet[2779]: E1215 09:01:47.881962 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:47.882428 containerd[1611]: time="2025-12-15T09:01:47.882364635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggrbh,Uid:4a3713b5-5b2b-432f-a2d2-a9138faef31f,Namespace:kube-system,Attempt:0,}" Dec 15 09:01:47.888051 containerd[1611]: time="2025-12-15T09:01:47.888005366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74ddd9dd8b-ddzhm,Uid:6d673cd1-b1e0-408c-b153-e7bc92b80142,Namespace:calico-system,Attempt:0,}" Dec 15 09:01:47.893522 containerd[1611]: time="2025-12-15T09:01:47.893487598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9jfj,Uid:1c5ab256-136a-4105-b959-3f278aa6f144,Namespace:calico-system,Attempt:0,}" Dec 15 09:01:47.899967 containerd[1611]: time="2025-12-15T09:01:47.899930134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-j8csg,Uid:8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd,Namespace:calico-apiserver,Attempt:0,}" Dec 15 09:01:47.905982 containerd[1611]: time="2025-12-15T09:01:47.905954159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f9f87d6-hpvt5,Uid:5b0886e8-4f46-439e-a9a8-bea45b864b25,Namespace:calico-system,Attempt:0,}" Dec 15 09:01:48.024603 containerd[1611]: time="2025-12-15T09:01:48.024480471Z" level=error msg="Failed to destroy network for sandbox \"3ad730ce6a959666a8f46356e5728830ba00c6c124bc1f53162e15675420eb61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.030318 containerd[1611]: time="2025-12-15T09:01:48.030271793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmm9f,Uid:5d2dcae9-0cbf-434e-ac9e-e764a111e542,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad730ce6a959666a8f46356e5728830ba00c6c124bc1f53162e15675420eb61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.030729 containerd[1611]: time="2025-12-15T09:01:48.030589433Z" level=error msg="Failed to destroy network for sandbox \"38e36457fca56d07a703da35e0c2315659c9b36122c6e8a4426d1c05750795ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.031041 kubelet[2779]: E1215 09:01:48.030688 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad730ce6a959666a8f46356e5728830ba00c6c124bc1f53162e15675420eb61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.031041 kubelet[2779]: E1215 09:01:48.030770 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad730ce6a959666a8f46356e5728830ba00c6c124bc1f53162e15675420eb61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dmm9f" Dec 15 
09:01:48.031041 kubelet[2779]: E1215 09:01:48.030791 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad730ce6a959666a8f46356e5728830ba00c6c124bc1f53162e15675420eb61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dmm9f" Dec 15 09:01:48.032563 kubelet[2779]: E1215 09:01:48.030867 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dmm9f_kube-system(5d2dcae9-0cbf-434e-ac9e-e764a111e542)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dmm9f_kube-system(5d2dcae9-0cbf-434e-ac9e-e764a111e542)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ad730ce6a959666a8f46356e5728830ba00c6c124bc1f53162e15675420eb61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dmm9f" podUID="5d2dcae9-0cbf-434e-ac9e-e764a111e542" Dec 15 09:01:48.033147 containerd[1611]: time="2025-12-15T09:01:48.033058097Z" level=error msg="Failed to destroy network for sandbox \"d84395f71e041ebd8534df33a9a5af2d7eb1917d2b6000c062ef6e68a22d7780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.037056 containerd[1611]: time="2025-12-15T09:01:48.036958247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-j8csg,Uid:8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"38e36457fca56d07a703da35e0c2315659c9b36122c6e8a4426d1c05750795ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.037535 kubelet[2779]: E1215 09:01:48.037493 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e36457fca56d07a703da35e0c2315659c9b36122c6e8a4426d1c05750795ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.037609 kubelet[2779]: E1215 09:01:48.037559 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e36457fca56d07a703da35e0c2315659c9b36122c6e8a4426d1c05750795ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" Dec 15 09:01:48.037609 kubelet[2779]: E1215 09:01:48.037581 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e36457fca56d07a703da35e0c2315659c9b36122c6e8a4426d1c05750795ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" Dec 15 09:01:48.037740 kubelet[2779]: E1215 09:01:48.037643 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5756d6c8cc-j8csg_calico-apiserver(8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-5756d6c8cc-j8csg_calico-apiserver(8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38e36457fca56d07a703da35e0c2315659c9b36122c6e8a4426d1c05750795ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd" Dec 15 09:01:48.039878 containerd[1611]: time="2025-12-15T09:01:48.039830464Z" level=error msg="Failed to destroy network for sandbox \"1536f12f73716d625fa09719304dcfff71643ecb99cd1214577fd341288294d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.041622 containerd[1611]: time="2025-12-15T09:01:48.041560362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-7vpbv,Uid:4740f6ee-54b4-4ea9-b846-2cfc949dbd68,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84395f71e041ebd8534df33a9a5af2d7eb1917d2b6000c062ef6e68a22d7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.042306 kubelet[2779]: E1215 09:01:48.042251 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84395f71e041ebd8534df33a9a5af2d7eb1917d2b6000c062ef6e68a22d7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 
09:01:48.042393 kubelet[2779]: E1215 09:01:48.042322 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84395f71e041ebd8534df33a9a5af2d7eb1917d2b6000c062ef6e68a22d7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" Dec 15 09:01:48.042393 kubelet[2779]: E1215 09:01:48.042350 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84395f71e041ebd8534df33a9a5af2d7eb1917d2b6000c062ef6e68a22d7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" Dec 15 09:01:48.042457 kubelet[2779]: E1215 09:01:48.042430 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5756d6c8cc-7vpbv_calico-apiserver(4740f6ee-54b4-4ea9-b846-2cfc949dbd68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5756d6c8cc-7vpbv_calico-apiserver(4740f6ee-54b4-4ea9-b846-2cfc949dbd68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d84395f71e041ebd8534df33a9a5af2d7eb1917d2b6000c062ef6e68a22d7780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:01:48.043468 containerd[1611]: time="2025-12-15T09:01:48.043427651Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-k9jfj,Uid:1c5ab256-136a-4105-b959-3f278aa6f144,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1536f12f73716d625fa09719304dcfff71643ecb99cd1214577fd341288294d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.043782 kubelet[2779]: E1215 09:01:48.043742 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1536f12f73716d625fa09719304dcfff71643ecb99cd1214577fd341288294d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.044537 kubelet[2779]: E1215 09:01:48.044478 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1536f12f73716d625fa09719304dcfff71643ecb99cd1214577fd341288294d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k9jfj" Dec 15 09:01:48.044537 kubelet[2779]: E1215 09:01:48.044509 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1536f12f73716d625fa09719304dcfff71643ecb99cd1214577fd341288294d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k9jfj" Dec 15 09:01:48.044818 kubelet[2779]: E1215 09:01:48.044747 2779 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-k9jfj_calico-system(1c5ab256-136a-4105-b959-3f278aa6f144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-k9jfj_calico-system(1c5ab256-136a-4105-b959-3f278aa6f144)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1536f12f73716d625fa09719304dcfff71643ecb99cd1214577fd341288294d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-k9jfj" podUID="1c5ab256-136a-4105-b959-3f278aa6f144" Dec 15 09:01:48.050656 containerd[1611]: time="2025-12-15T09:01:48.050551781Z" level=error msg="Failed to destroy network for sandbox \"f28181d089ed53adc770be03679f34212fa2f7fb1c5f0e047c8b454272dc3ea1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.051140 containerd[1611]: time="2025-12-15T09:01:48.051097572Z" level=error msg="Failed to destroy network for sandbox \"84f0f5888c325d3367a5672201dd4da08c459374c7ba3e790359e156b81e079a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.052926 containerd[1611]: time="2025-12-15T09:01:48.052864721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f9f87d6-hpvt5,Uid:5b0886e8-4f46-439e-a9a8-bea45b864b25,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f28181d089ed53adc770be03679f34212fa2f7fb1c5f0e047c8b454272dc3ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.053395 kubelet[2779]: E1215 09:01:48.053346 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f28181d089ed53adc770be03679f34212fa2f7fb1c5f0e047c8b454272dc3ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.053480 kubelet[2779]: E1215 09:01:48.053409 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f28181d089ed53adc770be03679f34212fa2f7fb1c5f0e047c8b454272dc3ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" Dec 15 09:01:48.053480 kubelet[2779]: E1215 09:01:48.053431 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f28181d089ed53adc770be03679f34212fa2f7fb1c5f0e047c8b454272dc3ea1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" Dec 15 09:01:48.053560 kubelet[2779]: E1215 09:01:48.053491 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f9f87d6-hpvt5_calico-system(5b0886e8-4f46-439e-a9a8-bea45b864b25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f9f87d6-hpvt5_calico-system(5b0886e8-4f46-439e-a9a8-bea45b864b25)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"f28181d089ed53adc770be03679f34212fa2f7fb1c5f0e047c8b454272dc3ea1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" podUID="5b0886e8-4f46-439e-a9a8-bea45b864b25" Dec 15 09:01:48.055138 containerd[1611]: time="2025-12-15T09:01:48.055093343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggrbh,Uid:4a3713b5-5b2b-432f-a2d2-a9138faef31f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f0f5888c325d3367a5672201dd4da08c459374c7ba3e790359e156b81e079a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.055364 kubelet[2779]: E1215 09:01:48.055323 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f0f5888c325d3367a5672201dd4da08c459374c7ba3e790359e156b81e079a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.055479 kubelet[2779]: E1215 09:01:48.055406 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f0f5888c325d3367a5672201dd4da08c459374c7ba3e790359e156b81e079a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ggrbh" Dec 15 09:01:48.055479 kubelet[2779]: E1215 09:01:48.055427 2779 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84f0f5888c325d3367a5672201dd4da08c459374c7ba3e790359e156b81e079a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ggrbh" Dec 15 09:01:48.055608 kubelet[2779]: E1215 09:01:48.055482 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ggrbh_kube-system(4a3713b5-5b2b-432f-a2d2-a9138faef31f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ggrbh_kube-system(4a3713b5-5b2b-432f-a2d2-a9138faef31f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84f0f5888c325d3367a5672201dd4da08c459374c7ba3e790359e156b81e079a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ggrbh" podUID="4a3713b5-5b2b-432f-a2d2-a9138faef31f" Dec 15 09:01:48.059316 containerd[1611]: time="2025-12-15T09:01:48.059243444Z" level=error msg="Failed to destroy network for sandbox \"b619fbdb92abe3a8d048c3b2f4b17e31ffc6e719111a6fc645ce91932fdb6ac0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.061492 containerd[1611]: time="2025-12-15T09:01:48.061439313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74ddd9dd8b-ddzhm,Uid:6d673cd1-b1e0-408c-b153-e7bc92b80142,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b619fbdb92abe3a8d048c3b2f4b17e31ffc6e719111a6fc645ce91932fdb6ac0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.061687 kubelet[2779]: E1215 09:01:48.061649 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b619fbdb92abe3a8d048c3b2f4b17e31ffc6e719111a6fc645ce91932fdb6ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.061730 kubelet[2779]: E1215 09:01:48.061697 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b619fbdb92abe3a8d048c3b2f4b17e31ffc6e719111a6fc645ce91932fdb6ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74ddd9dd8b-ddzhm" Dec 15 09:01:48.061763 kubelet[2779]: E1215 09:01:48.061728 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b619fbdb92abe3a8d048c3b2f4b17e31ffc6e719111a6fc645ce91932fdb6ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74ddd9dd8b-ddzhm" Dec 15 09:01:48.061792 kubelet[2779]: E1215 09:01:48.061768 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74ddd9dd8b-ddzhm_calico-system(6d673cd1-b1e0-408c-b153-e7bc92b80142)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74ddd9dd8b-ddzhm_calico-system(6d673cd1-b1e0-408c-b153-e7bc92b80142)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"b619fbdb92abe3a8d048c3b2f4b17e31ffc6e719111a6fc645ce91932fdb6ac0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74ddd9dd8b-ddzhm" podUID="6d673cd1-b1e0-408c-b153-e7bc92b80142" Dec 15 09:01:48.725824 systemd[1]: Created slice kubepods-besteffort-pod47f37f66_a36c_44b9_8447_de6c1cff5809.slice - libcontainer container kubepods-besteffort-pod47f37f66_a36c_44b9_8447_de6c1cff5809.slice. Dec 15 09:01:48.728394 containerd[1611]: time="2025-12-15T09:01:48.728342201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg84l,Uid:47f37f66-a36c-44b9-8447-de6c1cff5809,Namespace:calico-system,Attempt:0,}" Dec 15 09:01:48.777787 containerd[1611]: time="2025-12-15T09:01:48.777730538Z" level=error msg="Failed to destroy network for sandbox \"c1bd1f7f927a63a2d4c340d44ea66a4c018f7f0ef407264e0370726095ca7c89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.779799 systemd[1]: run-netns-cni\x2daabcbbdf\x2d887a\x2dc4a6\x2d7757\x2dc3e53933b580.mount: Deactivated successfully. 
Dec 15 09:01:48.782612 containerd[1611]: time="2025-12-15T09:01:48.782512284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg84l,Uid:47f37f66-a36c-44b9-8447-de6c1cff5809,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bd1f7f927a63a2d4c340d44ea66a4c018f7f0ef407264e0370726095ca7c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.782784 kubelet[2779]: E1215 09:01:48.782733 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bd1f7f927a63a2d4c340d44ea66a4c018f7f0ef407264e0370726095ca7c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 15 09:01:48.782855 kubelet[2779]: E1215 09:01:48.782822 2779 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bd1f7f927a63a2d4c340d44ea66a4c018f7f0ef407264e0370726095ca7c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qg84l" Dec 15 09:01:48.782892 kubelet[2779]: E1215 09:01:48.782846 2779 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1bd1f7f927a63a2d4c340d44ea66a4c018f7f0ef407264e0370726095ca7c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qg84l" 
Dec 15 09:01:48.782935 kubelet[2779]: E1215 09:01:48.782908 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1bd1f7f927a63a2d4c340d44ea66a4c018f7f0ef407264e0370726095ca7c89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:01:57.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.128:22-10.0.0.1:59230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:01:57.262402 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:59230.service - OpenSSH per-connection server daemon (10.0.0.1:59230). Dec 15 09:01:57.268141 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 15 09:01:57.268269 kernel: audit: type=1130 audit(1765789317.260:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.128:22-10.0.0.1:59230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:01:57.727000 audit[3891]: USER_ACCT pid=3891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.731869 sshd-session[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:01:57.733332 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 59230 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:01:57.734823 kernel: audit: type=1101 audit(1765789317.727:572): pid=3891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.729000 audit[3891]: CRED_ACQ pid=3891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.736815 systemd-logind[1586]: New session 9 of user core. 
Dec 15 09:01:57.740061 kernel: audit: type=1103 audit(1765789317.729:573): pid=3891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.729000 audit[3891]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0cda5530 a2=3 a3=0 items=0 ppid=1 pid=3891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:57.804259 kernel: audit: type=1006 audit(1765789317.729:574): pid=3891 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 15 09:01:57.804318 kernel: audit: type=1300 audit(1765789317.729:574): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0cda5530 a2=3 a3=0 items=0 ppid=1 pid=3891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:57.729000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:01:57.806889 kernel: audit: type=1327 audit(1765789317.729:574): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:01:57.807127 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 15 09:01:57.809289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109466814.mount: Deactivated successfully. 
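The `PROCTITLE` record above carries the process title hex-encoded (NUL bytes separate argv elements when the process has not overwritten its title). A sketch decoding the value from the sshd-session record:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded bytes, with NUL
    separators between argv elements (a single element here, because
    sshd rewrites its process title)."""
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00")]

# The PROCTITLE value from the sshd-session SYSCALL record above:
title = decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D")
print(title)  # ['sshd-session: core [priv]']
```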
Dec 15 09:01:57.809000 audit[3891]: USER_START pid=3891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.822524 kernel: audit: type=1105 audit(1765789317.809:575): pid=3891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.822603 kernel: audit: type=1103 audit(1765789317.813:576): pid=3895 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:57.813000 audit[3895]: CRED_ACQ pid=3895 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:58.254756 sshd[3895]: Connection closed by 10.0.0.1 port 59230 Dec 15 09:01:58.255070 sshd-session[3891]: pam_unix(sshd:session): session closed for user core Dec 15 09:01:58.254000 audit[3891]: USER_END pid=3891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:58.260014 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:59230.service: Deactivated successfully. 
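The `audit(1765789317.809:575)` stamps in the kernel echoes are epoch seconds with milliseconds plus a per-boot serial number. A sketch converting one back to the wall-clock time shown in the journal prefix:

```python
from datetime import datetime, timezone

def audit_stamp(field: str) -> tuple:
    """Split an 'audit(EPOCH.MS:SERIAL)' field into (UTC time, serial)."""
    inner = field[len("audit("):-1]          # "1765789317.809:575"
    ts, serial = inner.split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

when, serial = audit_stamp("audit(1765789317.809:575)")
print(when, serial)  # 09:01:57 UTC on 2025-12-15, matching the journal line
```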
Dec 15 09:01:58.262621 systemd[1]: session-9.scope: Deactivated successfully. Dec 15 09:01:58.254000 audit[3891]: CRED_DISP pid=3891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:58.264171 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Dec 15 09:01:58.265085 systemd-logind[1586]: Removed session 9. Dec 15 09:01:58.267076 kernel: audit: type=1106 audit(1765789318.254:577): pid=3891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:58.267126 kernel: audit: type=1104 audit(1765789318.254:578): pid=3891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:01:58.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.128:22-10.0.0.1:59230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:01:58.926532 containerd[1611]: time="2025-12-15T09:01:58.926461212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:58.927296 containerd[1611]: time="2025-12-15T09:01:58.927258365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 15 09:01:58.928334 containerd[1611]: time="2025-12-15T09:01:58.928274321Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:58.930685 containerd[1611]: time="2025-12-15T09:01:58.930652495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 15 09:01:58.931832 containerd[1611]: time="2025-12-15T09:01:58.931330373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.133279171s" Dec 15 09:01:58.931832 containerd[1611]: time="2025-12-15T09:01:58.931385587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 15 09:01:58.948002 containerd[1611]: time="2025-12-15T09:01:58.947924173Z" level=info msg="CreateContainer within sandbox \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 15 09:01:58.958605 containerd[1611]: time="2025-12-15T09:01:58.958565779Z" level=info 
msg="Container 080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:01:58.967655 containerd[1611]: time="2025-12-15T09:01:58.967598211Z" level=info msg="CreateContainer within sandbox \"7beb08c0df5b0b6132a903386ff82aa1aabb964c17238ef6b3fc6fa252b7275b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca\"" Dec 15 09:01:58.968301 containerd[1611]: time="2025-12-15T09:01:58.968229462Z" level=info msg="StartContainer for \"080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca\"" Dec 15 09:01:58.970154 containerd[1611]: time="2025-12-15T09:01:58.970110709Z" level=info msg="connecting to shim 080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca" address="unix:///run/containerd/s/933788b51f8abc6b906b77c2c686e4d71a30f983c1ebd3310232a19f682d2aff" protocol=ttrpc version=3 Dec 15 09:01:58.999004 systemd[1]: Started cri-containerd-080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca.scope - libcontainer container 080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca. 
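The PullImage line above reports the calico/node image (156883537 bytes by digest) pulled in 11.133279171s; a quick sanity check of the implied transfer rate, using only those two figures from the log:

```python
size_bytes = 156_883_537     # repo-digest size from the "Pulled image" line
pull_seconds = 11.133279171  # "in 11.133279171s" from the same line

rate_mib_s = size_bytes / pull_seconds / (1024 * 1024)
print(f"~{rate_mib_s:.1f} MiB/s")  # roughly 13.4 MiB/s
```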
Dec 15 09:01:59.081000 audit: BPF prog-id=176 op=LOAD Dec 15 09:01:59.081000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3324 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:59.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038306535363433393665656333306536626661326234366237356432 Dec 15 09:01:59.081000 audit: BPF prog-id=177 op=LOAD Dec 15 09:01:59.081000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3324 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:59.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038306535363433393665656333306536626661326234366237356432 Dec 15 09:01:59.081000 audit: BPF prog-id=177 op=UNLOAD Dec 15 09:01:59.081000 audit[3910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:59.081000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038306535363433393665656333306536626661326234366237356432 Dec 15 09:01:59.081000 audit: BPF prog-id=176 op=UNLOAD Dec 15 09:01:59.081000 audit[3910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:59.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038306535363433393665656333306536626661326234366237356432 Dec 15 09:01:59.081000 audit: BPF prog-id=178 op=LOAD Dec 15 09:01:59.081000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3324 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:01:59.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038306535363433393665656333306536626661326234366237356432 Dec 15 09:01:59.107239 containerd[1611]: time="2025-12-15T09:01:59.107159638Z" level=info msg="StartContainer for \"080e564396eec30e6bfa2b46b75d25ed158a1d1a1978f0202b5e26b10148fbca\" returns successfully" Dec 15 09:01:59.188065 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
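The runc `PROCTITLE` values above use the same hex encoding, with NUL separators between argv entries; the kernel truncates the record (here at 128 bytes), which is why the container ID at the end stops mid-string. A sketch decoding the value verbatim from the records above:

```python
def decode_argv(hex_value: str) -> list[str]:
    """Decode a hex-encoded, NUL-separated argv as found in audit
    PROCTITLE records; the kernel truncates long command lines."""
    return [p.decode("utf-8", "replace")
            for p in bytes.fromhex(hex_value).split(b"\x00")]

argv = decode_argv(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
    "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
    "696F2F3038306535363433393665656333306536626661326234366237356432"
)
print(argv[:4])  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```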
Dec 15 09:01:59.188182 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 15 09:01:59.460057 kubelet[2779]: I1215 09:01:59.460012 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-ca-bundle\") pod \"6d673cd1-b1e0-408c-b153-e7bc92b80142\" (UID: \"6d673cd1-b1e0-408c-b153-e7bc92b80142\") " Dec 15 09:01:59.460057 kubelet[2779]: I1215 09:01:59.460061 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-backend-key-pair\") pod \"6d673cd1-b1e0-408c-b153-e7bc92b80142\" (UID: \"6d673cd1-b1e0-408c-b153-e7bc92b80142\") " Dec 15 09:01:59.460602 kubelet[2779]: I1215 09:01:59.460560 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6d673cd1-b1e0-408c-b153-e7bc92b80142" (UID: "6d673cd1-b1e0-408c-b153-e7bc92b80142"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 15 09:01:59.460910 kubelet[2779]: I1215 09:01:59.460724 2779 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527f6\" (UniqueName: \"kubernetes.io/projected/6d673cd1-b1e0-408c-b153-e7bc92b80142-kube-api-access-527f6\") pod \"6d673cd1-b1e0-408c-b153-e7bc92b80142\" (UID: \"6d673cd1-b1e0-408c-b153-e7bc92b80142\") " Dec 15 09:01:59.460910 kubelet[2779]: I1215 09:01:59.460873 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 15 09:01:59.463893 kubelet[2779]: I1215 09:01:59.463848 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6d673cd1-b1e0-408c-b153-e7bc92b80142" (UID: "6d673cd1-b1e0-408c-b153-e7bc92b80142"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 15 09:01:59.464043 kubelet[2779]: I1215 09:01:59.464013 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d673cd1-b1e0-408c-b153-e7bc92b80142-kube-api-access-527f6" (OuterVolumeSpecName: "kube-api-access-527f6") pod "6d673cd1-b1e0-408c-b153-e7bc92b80142" (UID: "6d673cd1-b1e0-408c-b153-e7bc92b80142"). InnerVolumeSpecName "kube-api-access-527f6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 15 09:01:59.561409 kubelet[2779]: I1215 09:01:59.561362 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-527f6\" (UniqueName: \"kubernetes.io/projected/6d673cd1-b1e0-408c-b153-e7bc92b80142-kube-api-access-527f6\") on node \"localhost\" DevicePath \"\"" Dec 15 09:01:59.561409 kubelet[2779]: I1215 09:01:59.561385 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d673cd1-b1e0-408c-b153-e7bc92b80142-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 15 09:01:59.795021 kubelet[2779]: I1215 09:01:59.794982 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 09:01:59.795576 kubelet[2779]: E1215 09:01:59.795313 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:59.823385 kubelet[2779]: E1215 09:01:59.823347 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:59.823550 kubelet[2779]: E1215 09:01:59.823528 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:01:59.832343 systemd[1]: Removed slice kubepods-besteffort-pod6d673cd1_b1e0_408c_b153_e7bc92b80142.slice - libcontainer container kubepods-besteffort-pod6d673cd1_b1e0_408c_b153_e7bc92b80142.slice. Dec 15 09:01:59.941111 systemd[1]: var-lib-kubelet-pods-6d673cd1\x2db1e0\x2d408c\x2db153\x2de7bc92b80142-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d527f6.mount: Deactivated successfully. 
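The `.mount` unit names above (`var-lib-kubelet-pods-6d673cd1\x2db1e0...`) encode filesystem paths with systemd's escaping: `/` becomes `-`, and a literal `-` (or other reserved byte such as `~`) becomes `\xNN`. A simplified sketch of that transform; `systemd-escape(1)` handles more edge cases (empty path, leading dot, UTF-8 bytes):

```python
def systemd_escape_path(path: str) -> str:
    """Simplified sketch of systemd path escaping as seen in the .mount
    unit names above: strip slashes at the ends, '/' -> '-', and other
    non [A-Za-z0-9:_.] characters -> \\xNN (so a real '-' in the path
    is written '\\x2d', and '~' becomes '\\x7e')."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in ":_."):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/var/lib/kubelet/pods"))
print(systemd_escape_path("/var/lib/kubelet/pods/6d673cd1-b1e0"))
```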
Dec 15 09:01:59.941251 systemd[1]: var-lib-kubelet-pods-6d673cd1\x2db1e0\x2d408c\x2db153\x2de7bc92b80142-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 15 09:02:00.719645 containerd[1611]: time="2025-12-15T09:02:00.719593261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-j8csg,Uid:8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd,Namespace:calico-apiserver,Attempt:0,}" Dec 15 09:02:00.824650 kubelet[2779]: E1215 09:02:00.824620 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:01.277444 kubelet[2779]: I1215 09:02:01.276924 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zr66q" podStartSLOduration=3.034174859 podStartE2EDuration="26.276903839s" podCreationTimestamp="2025-12-15 09:01:35 +0000 UTC" firstStartedPulling="2025-12-15 09:01:35.691570121 +0000 UTC m=+19.075342035" lastFinishedPulling="2025-12-15 09:01:58.934299101 +0000 UTC m=+42.318071015" observedRunningTime="2025-12-15 09:02:01.268375374 +0000 UTC m=+44.652147298" watchObservedRunningTime="2025-12-15 09:02:01.276903839 +0000 UTC m=+44.660675763" Dec 15 09:02:01.278000 audit[4027]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:01.278000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7ef3bf60 a2=0 a3=7ffe7ef3bf4c items=0 ppid=2893 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 
15 09:02:01.287000 audit[4027]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:01.287000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7ef3bf60 a2=0 a3=7ffe7ef3bf4c items=0 ppid=2893 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:01.299269 systemd[1]: Created slice kubepods-besteffort-pod96d2f283_85f5_46e8_a0e8_3d26c4d28535.slice - libcontainer container kubepods-besteffort-pod96d2f283_85f5_46e8_a0e8_3d26c4d28535.slice. Dec 15 09:02:01.403250 systemd-networkd[1316]: cali1077caedc3c: Link UP Dec 15 09:02:01.403483 systemd-networkd[1316]: cali1077caedc3c: Gained carrier Dec 15 09:02:01.416084 containerd[1611]: 2025-12-15 09:02:01.235 [INFO][4014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 15 09:02:01.416084 containerd[1611]: 2025-12-15 09:02:01.263 [INFO][4014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0 calico-apiserver-5756d6c8cc- calico-apiserver 8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd 895 0 2025-12-15 09:01:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5756d6c8cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5756d6c8cc-j8csg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1077caedc3c [] [] }} 
ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-" Dec 15 09:02:01.416084 containerd[1611]: 2025-12-15 09:02:01.264 [INFO][4014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.416084 containerd[1611]: 2025-12-15 09:02:01.362 [INFO][4030] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" HandleID="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Workload="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.362 [INFO][4030] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" HandleID="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Workload="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000de550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5756d6c8cc-j8csg", "timestamp":"2025-12-15 09:02:01.362164559 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.363 [INFO][4030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
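The IPAM trace in this section assigns the pod an address (192.168.88.129) from the node's affine block 192.168.88.128/26. A quick consistency check with Python's `ipaddress` module, using the block and address values from the log:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")  # affine block from the trace
assigned = ipaddress.ip_address("192.168.88.129")  # IP claimed for the pod

print(block.num_addresses)      # 64 addresses in a /26
print(assigned in block)        # True: the claimed IP lies inside the block
print(block.broadcast_address)  # 192.168.88.191, last address of the block
```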
Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.363 [INFO][4030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.363 [INFO][4030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.371 [INFO][4030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" host="localhost" Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.376 [INFO][4030] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.380 [INFO][4030] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.381 [INFO][4030] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.383 [INFO][4030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:01.416317 containerd[1611]: 2025-12-15 09:02:01.383 [INFO][4030] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" host="localhost" Dec 15 09:02:01.416558 containerd[1611]: 2025-12-15 09:02:01.384 [INFO][4030] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8 Dec 15 09:02:01.416558 containerd[1611]: 2025-12-15 09:02:01.387 [INFO][4030] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" host="localhost" Dec 15 09:02:01.416558 containerd[1611]: 2025-12-15 09:02:01.391 [INFO][4030] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" host="localhost" Dec 15 09:02:01.416558 containerd[1611]: 2025-12-15 09:02:01.391 [INFO][4030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" host="localhost" Dec 15 09:02:01.416558 containerd[1611]: 2025-12-15 09:02:01.391 [INFO][4030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:01.416558 containerd[1611]: 2025-12-15 09:02:01.391 [INFO][4030] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" HandleID="k8s-pod-network.40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Workload="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.416687 containerd[1611]: 2025-12-15 09:02:01.395 [INFO][4014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0", GenerateName:"calico-apiserver-5756d6c8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5756d6c8cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5756d6c8cc-j8csg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1077caedc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:01.416744 containerd[1611]: 2025-12-15 09:02:01.395 [INFO][4014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.416744 containerd[1611]: 2025-12-15 09:02:01.395 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1077caedc3c ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.416744 containerd[1611]: 2025-12-15 09:02:01.403 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.416836 
containerd[1611]: 2025-12-15 09:02:01.403 [INFO][4014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0", GenerateName:"calico-apiserver-5756d6c8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5756d6c8cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8", Pod:"calico-apiserver-5756d6c8cc-j8csg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1077caedc3c", MAC:"36:f5:be:c9:44:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:01.416884 containerd[1611]: 2025-12-15 
09:02:01.412 [INFO][4014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-j8csg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--j8csg-eth0" Dec 15 09:02:01.441476 containerd[1611]: time="2025-12-15T09:02:01.441422273Z" level=info msg="connecting to shim 40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8" address="unix:///run/containerd/s/5e8ad56393b0a11247d9ea52dfef3ff0cbace8959287efdc67c3c21087bb2d9f" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:01.462988 systemd[1]: Started cri-containerd-40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8.scope - libcontainer container 40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8. Dec 15 09:02:01.473000 audit: BPF prog-id=179 op=LOAD Dec 15 09:02:01.474000 audit: BPF prog-id=180 op=LOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.474000 audit: BPF prog-id=180 op=UNLOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.474000 audit: BPF prog-id=181 op=LOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa488 a2=98 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.474000 audit: BPF prog-id=182 op=LOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.474000 audit: BPF prog-id=182 op=UNLOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.474000 audit: BPF prog-id=181 op=UNLOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.474000 audit: BPF prog-id=183 op=LOAD Dec 15 09:02:01.474000 audit[4076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=4064 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353833643537633063316235343161653931353961613439653832 Dec 15 09:02:01.476960 kubelet[2779]: I1215 09:02:01.476932 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/96d2f283-85f5-46e8-a0e8-3d26c4d28535-whisker-backend-key-pair\") pod \"whisker-77fb68b4b8-6g7q8\" 
(UID: \"96d2f283-85f5-46e8-a0e8-3d26c4d28535\") " pod="calico-system/whisker-77fb68b4b8-6g7q8" Dec 15 09:02:01.476960 kubelet[2779]: I1215 09:02:01.476961 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96d2f283-85f5-46e8-a0e8-3d26c4d28535-whisker-ca-bundle\") pod \"whisker-77fb68b4b8-6g7q8\" (UID: \"96d2f283-85f5-46e8-a0e8-3d26c4d28535\") " pod="calico-system/whisker-77fb68b4b8-6g7q8" Dec 15 09:02:01.477049 kubelet[2779]: I1215 09:02:01.476980 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnz5c\" (UniqueName: \"kubernetes.io/projected/96d2f283-85f5-46e8-a0e8-3d26c4d28535-kube-api-access-dnz5c\") pod \"whisker-77fb68b4b8-6g7q8\" (UID: \"96d2f283-85f5-46e8-a0e8-3d26c4d28535\") " pod="calico-system/whisker-77fb68b4b8-6g7q8" Dec 15 09:02:01.477331 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:01.508144 containerd[1611]: time="2025-12-15T09:02:01.508091238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-j8csg,Uid:8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"40583d57c0c1b541ae9159aa49e8200ccf6680d1bb19071943ed4226a32af8c8\"" Dec 15 09:02:01.509579 containerd[1611]: time="2025-12-15T09:02:01.509546942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 15 09:02:01.604858 containerd[1611]: time="2025-12-15T09:02:01.604737418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77fb68b4b8-6g7q8,Uid:96d2f283-85f5-46e8-a0e8-3d26c4d28535,Namespace:calico-system,Attempt:0,}" Dec 15 09:02:01.704666 systemd-networkd[1316]: cali3a1fcb25a9c: Link UP Dec 15 09:02:01.705416 systemd-networkd[1316]: cali3a1fcb25a9c: Gained carrier Dec 15 09:02:01.720117 containerd[1611]: 
time="2025-12-15T09:02:01.720070771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-7vpbv,Uid:4740f6ee-54b4-4ea9-b846-2cfc949dbd68,Namespace:calico-apiserver,Attempt:0,}" Dec 15 09:02:01.720577 containerd[1611]: 2025-12-15 09:02:01.634 [INFO][4103] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 15 09:02:01.720577 containerd[1611]: 2025-12-15 09:02:01.645 [INFO][4103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0 whisker-77fb68b4b8- calico-system 96d2f283-85f5-46e8-a0e8-3d26c4d28535 1021 0 2025-12-15 09:02:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77fb68b4b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77fb68b4b8-6g7q8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3a1fcb25a9c [] [] }} ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-" Dec 15 09:02:01.720577 containerd[1611]: 2025-12-15 09:02:01.645 [INFO][4103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.720577 containerd[1611]: 2025-12-15 09:02:01.671 [INFO][4117] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" HandleID="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Workload="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 
09:02:01.671 [INFO][4117] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" HandleID="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Workload="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c60c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77fb68b4b8-6g7q8", "timestamp":"2025-12-15 09:02:01.671044712 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.671 [INFO][4117] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.671 [INFO][4117] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.671 [INFO][4117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.678 [INFO][4117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" host="localhost" Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.681 [INFO][4117] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.685 [INFO][4117] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.687 [INFO][4117] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.689 [INFO][4117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:01.720707 containerd[1611]: 2025-12-15 09:02:01.689 [INFO][4117] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" host="localhost" Dec 15 09:02:01.720978 containerd[1611]: 2025-12-15 09:02:01.690 [INFO][4117] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7 Dec 15 09:02:01.720978 containerd[1611]: 2025-12-15 09:02:01.694 [INFO][4117] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" host="localhost" Dec 15 09:02:01.720978 containerd[1611]: 2025-12-15 09:02:01.699 [INFO][4117] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" host="localhost" Dec 15 09:02:01.720978 containerd[1611]: 2025-12-15 09:02:01.699 [INFO][4117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" host="localhost" Dec 15 09:02:01.720978 containerd[1611]: 2025-12-15 09:02:01.699 [INFO][4117] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:01.720978 containerd[1611]: 2025-12-15 09:02:01.699 [INFO][4117] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" HandleID="k8s-pod-network.4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Workload="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.721099 containerd[1611]: 2025-12-15 09:02:01.702 [INFO][4103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0", GenerateName:"whisker-77fb68b4b8-", Namespace:"calico-system", SelfLink:"", UID:"96d2f283-85f5-46e8-a0e8-3d26c4d28535", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77fb68b4b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77fb68b4b8-6g7q8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3a1fcb25a9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:01.721099 containerd[1611]: 2025-12-15 09:02:01.702 [INFO][4103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.721186 containerd[1611]: 2025-12-15 09:02:01.703 [INFO][4103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a1fcb25a9c ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.721186 containerd[1611]: 2025-12-15 09:02:01.704 [INFO][4103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.721233 containerd[1611]: 2025-12-15 09:02:01.706 [INFO][4103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" 
WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0", GenerateName:"whisker-77fb68b4b8-", Namespace:"calico-system", SelfLink:"", UID:"96d2f283-85f5-46e8-a0e8-3d26c4d28535", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77fb68b4b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7", Pod:"whisker-77fb68b4b8-6g7q8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3a1fcb25a9c", MAC:"aa:a0:41:68:17:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:01.721285 containerd[1611]: 2025-12-15 09:02:01.716 [INFO][4103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" Namespace="calico-system" Pod="whisker-77fb68b4b8-6g7q8" WorkloadEndpoint="localhost-k8s-whisker--77fb68b4b8--6g7q8-eth0" Dec 15 09:02:01.721285 containerd[1611]: time="2025-12-15T09:02:01.720407235Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qg84l,Uid:47f37f66-a36c-44b9-8447-de6c1cff5809,Namespace:calico-system,Attempt:0,}" Dec 15 09:02:01.721285 containerd[1611]: time="2025-12-15T09:02:01.720510460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9jfj,Uid:1c5ab256-136a-4105-b959-3f278aa6f144,Namespace:calico-system,Attempt:0,}" Dec 15 09:02:01.759419 containerd[1611]: time="2025-12-15T09:02:01.759358815Z" level=info msg="connecting to shim 4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7" address="unix:///run/containerd/s/352bf8236a890fa39f2bc92be7d31764e8e6ca5569932290d1fc712f69f4ad74" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:01.801009 systemd[1]: Started cri-containerd-4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7.scope - libcontainer container 4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7. Dec 15 09:02:01.820000 audit: BPF prog-id=184 op=LOAD Dec 15 09:02:01.821000 audit: BPF prog-id=185 op=LOAD Dec 15 09:02:01.821000 audit[4192]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.822000 audit: BPF prog-id=185 op=UNLOAD Dec 15 09:02:01.822000 audit[4192]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.822000 audit: BPF prog-id=186 op=LOAD Dec 15 09:02:01.822000 audit[4192]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.823000 audit: BPF prog-id=187 op=LOAD Dec 15 09:02:01.823000 audit[4192]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.823000 audit: BPF prog-id=187 op=UNLOAD Dec 15 09:02:01.823000 audit[4192]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.823000 audit: BPF prog-id=186 op=UNLOAD Dec 15 09:02:01.823000 audit[4192]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.824000 audit: BPF prog-id=188 op=LOAD Dec 15 09:02:01.824000 audit[4192]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4177 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:01.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462393435643035613564616164323937643861643031633030376338 Dec 15 09:02:01.831214 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:01.834069 kubelet[2779]: E1215 09:02:01.834010 2779 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:01.846504 containerd[1611]: time="2025-12-15T09:02:01.846453091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:02.099098 containerd[1611]: time="2025-12-15T09:02:02.098981786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 15 09:02:02.099098 containerd[1611]: time="2025-12-15T09:02:02.099065793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:02.099355 kubelet[2779]: E1215 09:02:02.099241 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:02.099355 kubelet[2779]: E1215 09:02:02.099285 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:02.103899 kubelet[2779]: E1215 09:02:02.103840 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggr54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5756d6c8cc-j8csg_calico-apiserver(8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:02.105020 kubelet[2779]: E1215 09:02:02.104995 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd" Dec 15 09:02:02.275826 containerd[1611]: time="2025-12-15T09:02:02.275758246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77fb68b4b8-6g7q8,Uid:96d2f283-85f5-46e8-a0e8-3d26c4d28535,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b945d05a5daad297d8ad01c007c8af4925741782c22dc36b12d3deb5bcb25a7\"" Dec 15 09:02:02.277152 containerd[1611]: time="2025-12-15T09:02:02.277123529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 15 09:02:02.696976 systemd-networkd[1316]: cali1077caedc3c: Gained IPv6LL Dec 15 09:02:02.719526 kubelet[2779]: E1215 09:02:02.719474 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:02.719814 kubelet[2779]: E1215 09:02:02.719683 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:02.719895 containerd[1611]: time="2025-12-15T09:02:02.719823385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f9f87d6-hpvt5,Uid:5b0886e8-4f46-439e-a9a8-bea45b864b25,Namespace:calico-system,Attempt:0,}" Dec 15 09:02:02.720010 
containerd[1611]: time="2025-12-15T09:02:02.719930577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggrbh,Uid:4a3713b5-5b2b-432f-a2d2-a9138faef31f,Namespace:kube-system,Attempt:0,}" Dec 15 09:02:02.720010 containerd[1611]: time="2025-12-15T09:02:02.719985971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmm9f,Uid:5d2dcae9-0cbf-434e-ac9e-e764a111e542,Namespace:kube-system,Attempt:0,}" Dec 15 09:02:02.836536 kubelet[2779]: E1215 09:02:02.836489 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd" Dec 15 09:02:02.938528 kubelet[2779]: I1215 09:02:02.938490 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d673cd1-b1e0-408c-b153-e7bc92b80142" path="/var/lib/kubelet/pods/6d673cd1-b1e0-408c-b153-e7bc92b80142/volumes" Dec 15 09:02:03.272011 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:44960.service - OpenSSH per-connection server daemon (10.0.0.1:44960). Dec 15 09:02:03.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.128:22-10.0.0.1:44960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:03.278415 kernel: kauditd_printk_skb: 66 callbacks suppressed Dec 15 09:02:03.278486 kernel: audit: type=1130 audit(1765789323.270:603): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.128:22-10.0.0.1:44960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:03.336976 systemd-networkd[1316]: cali3a1fcb25a9c: Gained IPv6LL Dec 15 09:02:03.357000 audit[4375]: USER_ACCT pid=4375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.359961 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 44960 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:03.362922 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:03.370403 kernel: audit: type=1101 audit(1765789323.357:604): pid=4375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.370511 kernel: audit: type=1103 audit(1765789323.359:605): pid=4375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.359000 audit[4375]: CRED_ACQ pid=4375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.368369 systemd-logind[1586]: 
New session 10 of user core. Dec 15 09:02:03.359000 audit[4375]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe39a8d320 a2=3 a3=0 items=0 ppid=1 pid=4375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:03.378858 kernel: audit: type=1006 audit(1765789323.359:606): pid=4375 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 15 09:02:03.378925 kernel: audit: type=1300 audit(1765789323.359:606): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe39a8d320 a2=3 a3=0 items=0 ppid=1 pid=4375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:03.378980 kernel: audit: type=1327 audit(1765789323.359:606): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:03.359000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:03.387217 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 15 09:02:03.389000 audit[4375]: USER_START pid=4375 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.393000 audit[4379]: CRED_ACQ pid=4379 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.402735 kernel: audit: type=1105 audit(1765789323.389:607): pid=4375 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.402783 kernel: audit: type=1103 audit(1765789323.393:608): pid=4379 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:03.498046 containerd[1611]: time="2025-12-15T09:02:03.497995247Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:03.996122 sshd[4379]: Connection closed by 10.0.0.1 port 44960 Dec 15 09:02:03.996557 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:03.996000 audit[4375]: USER_END pid=4375 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 15 09:02:04.003516 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Dec 15 09:02:04.003696 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:44960.service: Deactivated successfully. Dec 15 09:02:03.996000 audit[4375]: CRED_DISP pid=4375 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:04.006079 systemd[1]: session-10.scope: Deactivated successfully. Dec 15 09:02:04.007798 systemd-logind[1586]: Removed session 10. Dec 15 09:02:04.008851 kernel: audit: type=1106 audit(1765789323.996:609): pid=4375 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:04.008920 kernel: audit: type=1104 audit(1765789323.996:610): pid=4375 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:04.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.128:22-10.0.0.1:44960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:04.090619 systemd-networkd[1316]: calia300e31c4bb: Link UP Dec 15 09:02:04.090871 systemd-networkd[1316]: calia300e31c4bb: Gained carrier Dec 15 09:02:04.234340 containerd[1611]: time="2025-12-15T09:02:04.234243720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 15 09:02:04.234465 containerd[1611]: time="2025-12-15T09:02:04.234290288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:04.234598 kubelet[2779]: E1215 09:02:04.234520 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 15 09:02:04.234598 kubelet[2779]: E1215 09:02:04.234572 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 15 09:02:04.235024 kubelet[2779]: E1215 09:02:04.234694 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a068dc8facc4dbe9f1ecf18e1d1e8f5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77fb68b4b8-6g7q8_calico-system(96d2f283-85f5-46e8-a0e8-3d26c4d28535): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:04.236654 containerd[1611]: time="2025-12-15T09:02:04.236624747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 15 09:02:04.380121 containerd[1611]: 2025-12-15 09:02:01.761 
[INFO][4135] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 15 09:02:04.380121 containerd[1611]: 2025-12-15 09:02:01.773 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qg84l-eth0 csi-node-driver- calico-system 47f37f66-a36c-44b9-8447-de6c1cff5809 774 0 2025-12-15 09:01:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qg84l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia300e31c4bb [] [] }} ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-" Dec 15 09:02:04.380121 containerd[1611]: 2025-12-15 09:02:01.773 [INFO][4135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.380121 containerd[1611]: 2025-12-15 09:02:01.816 [INFO][4207] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" HandleID="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Workload="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.816 [INFO][4207] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" 
HandleID="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Workload="localhost-k8s-csi--node--driver--qg84l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qg84l", "timestamp":"2025-12-15 09:02:01.816609388 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.816 [INFO][4207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.816 [INFO][4207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.816 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.825 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" host="localhost" Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.828 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.834 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:01.837 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:02.007 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.380343 containerd[1611]: 2025-12-15 09:02:02.007 
[INFO][4207] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" host="localhost" Dec 15 09:02:04.380621 containerd[1611]: 2025-12-15 09:02:02.460 [INFO][4207] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e Dec 15 09:02:04.380621 containerd[1611]: 2025-12-15 09:02:03.511 [INFO][4207] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" host="localhost" Dec 15 09:02:04.380621 containerd[1611]: 2025-12-15 09:02:04.085 [INFO][4207] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" host="localhost" Dec 15 09:02:04.380621 containerd[1611]: 2025-12-15 09:02:04.085 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" host="localhost" Dec 15 09:02:04.380621 containerd[1611]: 2025-12-15 09:02:04.085 [INFO][4207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 15 09:02:04.380621 containerd[1611]: 2025-12-15 09:02:04.085 [INFO][4207] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" HandleID="k8s-pod-network.d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Workload="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.380787 containerd[1611]: 2025-12-15 09:02:04.088 [INFO][4135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qg84l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47f37f66-a36c-44b9-8447-de6c1cff5809", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qg84l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia300e31c4bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.380872 containerd[1611]: 2025-12-15 09:02:04.088 [INFO][4135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.380872 containerd[1611]: 2025-12-15 09:02:04.088 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia300e31c4bb ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.380872 containerd[1611]: 2025-12-15 09:02:04.090 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.380949 containerd[1611]: 2025-12-15 09:02:04.090 [INFO][4135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qg84l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47f37f66-a36c-44b9-8447-de6c1cff5809", ResourceVersion:"774", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e", Pod:"csi-node-driver-qg84l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia300e31c4bb", MAC:"02:37:b1:ed:db:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.381014 containerd[1611]: 2025-12-15 09:02:04.372 [INFO][4135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" Namespace="calico-system" Pod="csi-node-driver-qg84l" WorkloadEndpoint="localhost-k8s-csi--node--driver--qg84l-eth0" Dec 15 09:02:04.402000 audit[4422]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:04.402000 audit[4422]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff67f6e830 a2=0 a3=7fff67f6e81c items=0 ppid=2893 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:04.408000 audit[4422]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:04.408000 audit[4422]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff67f6e830 a2=0 a3=0 items=0 ppid=2893 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:04.424723 containerd[1611]: time="2025-12-15T09:02:04.424570384Z" level=info msg="connecting to shim d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e" address="unix:///run/containerd/s/1310894c9876d6423258d0b39b3f52d15fbf4d7aa317c0e13c4ab8501a332a50" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:04.458111 systemd[1]: Started cri-containerd-d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e.scope - libcontainer container d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e. 
Dec 15 09:02:04.470000 audit: BPF prog-id=189 op=LOAD Dec 15 09:02:04.471000 audit: BPF prog-id=190 op=LOAD Dec 15 09:02:04.471000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.471000 audit: BPF prog-id=190 op=UNLOAD Dec 15 09:02:04.471000 audit[4444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.471000 audit: BPF prog-id=191 op=LOAD Dec 15 09:02:04.471000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.471000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.471000 audit: BPF prog-id=192 op=LOAD Dec 15 09:02:04.471000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.472000 audit: BPF prog-id=192 op=UNLOAD Dec 15 09:02:04.472000 audit[4444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.472000 audit: BPF prog-id=191 op=UNLOAD Dec 15 09:02:04.472000 audit[4444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:02:04.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.472000 audit: BPF prog-id=193 op=LOAD Dec 15 09:02:04.472000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4431 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432363263306365636431616636353032623934353562393261613332 Dec 15 09:02:04.475239 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:04.494593 containerd[1611]: time="2025-12-15T09:02:04.494524329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg84l,Uid:47f37f66-a36c-44b9-8447-de6c1cff5809,Namespace:calico-system,Attempt:0,} returns sandbox id \"d262c0cecd1af6502b9455b92aa32903bee7d3da97509e7ac4031f9ead1b105e\"" Dec 15 09:02:04.533354 systemd-networkd[1316]: cali3a89a9a3933: Link UP Dec 15 09:02:04.535394 systemd-networkd[1316]: cali3a89a9a3933: Gained carrier Dec 15 09:02:04.541000 audit: BPF prog-id=194 op=LOAD Dec 15 09:02:04.541000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe875a0db0 a2=98 a3=1fffffffffffffff items=0 ppid=4297 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 15 09:02:04.541000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 15 09:02:04.542000 audit: BPF prog-id=194 op=UNLOAD Dec 15 09:02:04.542000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe875a0d80 a3=0 items=0 ppid=4297 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.542000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 15 09:02:04.542000 audit: BPF prog-id=195 op=LOAD Dec 15 09:02:04.542000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe875a0c90 a2=94 a3=3 items=0 ppid=4297 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.542000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 15 09:02:04.542000 audit: BPF prog-id=195 op=UNLOAD Dec 15 09:02:04.542000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe875a0c90 a2=94 a3=3 items=0 ppid=4297 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.542000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 15 09:02:04.542000 audit: BPF prog-id=196 op=LOAD Dec 15 09:02:04.542000 audit[4480]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe875a0cd0 a2=94 a3=7ffe875a0eb0 items=0 ppid=4297 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.542000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 15 09:02:04.543000 audit: BPF prog-id=196 op=UNLOAD Dec 15 09:02:04.543000 audit[4480]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe875a0cd0 a2=94 a3=7ffe875a0eb0 items=0 ppid=4297 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.543000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 15 09:02:04.545000 audit: BPF prog-id=197 op=LOAD Dec 15 09:02:04.545000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe5f05daa0 a2=98 a3=3 items=0 ppid=4297 pid=4481 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.545000 audit: BPF prog-id=197 op=UNLOAD Dec 15 09:02:04.545000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe5f05da70 a3=0 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.545000 audit: BPF prog-id=198 op=LOAD Dec 15 09:02:04.545000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5f05d890 a2=94 a3=54428f items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.545000 audit: BPF prog-id=198 op=UNLOAD Dec 15 09:02:04.545000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5f05d890 a2=94 a3=54428f items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.545000 audit: BPF prog-id=199 op=LOAD Dec 15 09:02:04.545000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5f05d8c0 a2=94 a3=2 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.545000 audit: BPF prog-id=199 op=UNLOAD Dec 15 09:02:04.545000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5f05d8c0 a2=0 a3=2 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.553898 containerd[1611]: 2025-12-15 09:02:01.763 [INFO][4151] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 15 09:02:04.553898 containerd[1611]: 2025-12-15 09:02:01.784 [INFO][4151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--k9jfj-eth0 goldmane-666569f655- calico-system 1c5ab256-136a-4105-b959-3f278aa6f144 896 0 2025-12-15 09:01:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-k9jfj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3a89a9a3933 [] [] }} ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-" Dec 15 09:02:04.553898 containerd[1611]: 2025-12-15 09:02:01.784 [INFO][4151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.553898 containerd[1611]: 2025-12-15 09:02:01.823 [INFO][4215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" HandleID="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Workload="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:01.823 [INFO][4215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" HandleID="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Workload="localhost-k8s-goldmane--666569f655--k9jfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-k9jfj", "timestamp":"2025-12-15 09:02:01.823050279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:01.823 [INFO][4215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.085 [INFO][4215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.086 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.372 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" host="localhost" Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.492 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.498 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.500 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.504 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.554356 containerd[1611]: 2025-12-15 09:02:04.504 [INFO][4215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" host="localhost" Dec 15 09:02:04.554665 containerd[1611]: 2025-12-15 09:02:04.505 [INFO][4215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8 Dec 15 09:02:04.554665 containerd[1611]: 2025-12-15 09:02:04.510 [INFO][4215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" host="localhost" Dec 15 09:02:04.554665 containerd[1611]: 2025-12-15 09:02:04.518 [INFO][4215] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" host="localhost" Dec 15 09:02:04.554665 containerd[1611]: 2025-12-15 09:02:04.519 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" host="localhost" Dec 15 09:02:04.554665 containerd[1611]: 2025-12-15 09:02:04.519 [INFO][4215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:04.554665 containerd[1611]: 2025-12-15 09:02:04.519 [INFO][4215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" HandleID="k8s-pod-network.dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Workload="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.554861 containerd[1611]: 2025-12-15 09:02:04.526 [INFO][4151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--k9jfj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1c5ab256-136a-4105-b959-3f278aa6f144", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-k9jfj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3a89a9a3933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.554861 containerd[1611]: 2025-12-15 09:02:04.526 [INFO][4151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.554960 containerd[1611]: 2025-12-15 09:02:04.526 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a89a9a3933 ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.554960 containerd[1611]: 2025-12-15 09:02:04.537 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.555003 containerd[1611]: 2025-12-15 09:02:04.539 [INFO][4151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--k9jfj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1c5ab256-136a-4105-b959-3f278aa6f144", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8", Pod:"goldmane-666569f655-k9jfj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3a89a9a3933", MAC:"5e:4c:fc:a0:41:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.555049 containerd[1611]: 2025-12-15 09:02:04.549 [INFO][4151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" Namespace="calico-system" Pod="goldmane-666569f655-k9jfj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9jfj-eth0" Dec 15 09:02:04.581960 containerd[1611]: time="2025-12-15T09:02:04.581893399Z" level=info msg="connecting to shim 
dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8" address="unix:///run/containerd/s/8862e22973dd0a1fb6d5786e75b42b3ab645a672edd9d0a729cad2bb9fc8e758" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:04.628971 systemd[1]: Started cri-containerd-dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8.scope - libcontainer container dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8. Dec 15 09:02:04.630779 systemd-networkd[1316]: cali16669f2cdc3: Link UP Dec 15 09:02:04.634930 systemd-networkd[1316]: cali16669f2cdc3: Gained carrier Dec 15 09:02:04.654191 containerd[1611]: 2025-12-15 09:02:01.764 [INFO][4132] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 15 09:02:04.654191 containerd[1611]: 2025-12-15 09:02:01.785 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0 calico-apiserver-5756d6c8cc- calico-apiserver 4740f6ee-54b4-4ea9-b846-2cfc949dbd68 889 0 2025-12-15 09:01:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5756d6c8cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5756d6c8cc-7vpbv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16669f2cdc3 [] [] }} ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-" Dec 15 09:02:04.654191 containerd[1611]: 2025-12-15 09:02:01.785 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" 
Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.654191 containerd[1611]: 2025-12-15 09:02:01.824 [INFO][4209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" HandleID="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Workload="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:01.825 [INFO][4209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" HandleID="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Workload="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002254e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5756d6c8cc-7vpbv", "timestamp":"2025-12-15 09:02:01.824159019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:01.825 [INFO][4209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.519 [INFO][4209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.519 [INFO][4209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.525 [INFO][4209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" host="localhost" Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.594 [INFO][4209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.601 [INFO][4209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.603 [INFO][4209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.608 [INFO][4209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.654669 containerd[1611]: 2025-12-15 09:02:04.609 [INFO][4209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" host="localhost" Dec 15 09:02:04.655274 containerd[1611]: 2025-12-15 09:02:04.612 [INFO][4209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93 Dec 15 09:02:04.655274 containerd[1611]: 2025-12-15 09:02:04.618 [INFO][4209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" host="localhost" Dec 15 09:02:04.655274 containerd[1611]: 2025-12-15 09:02:04.624 [INFO][4209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" host="localhost" Dec 15 09:02:04.655274 containerd[1611]: 2025-12-15 09:02:04.624 [INFO][4209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" host="localhost" Dec 15 09:02:04.655274 containerd[1611]: 2025-12-15 09:02:04.624 [INFO][4209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:04.655274 containerd[1611]: 2025-12-15 09:02:04.624 [INFO][4209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" HandleID="k8s-pod-network.361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Workload="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.655509 containerd[1611]: 2025-12-15 09:02:04.628 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0", GenerateName:"calico-apiserver-5756d6c8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4740f6ee-54b4-4ea9-b846-2cfc949dbd68", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5756d6c8cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5756d6c8cc-7vpbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16669f2cdc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.655569 containerd[1611]: 2025-12-15 09:02:04.628 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.655569 containerd[1611]: 2025-12-15 09:02:04.628 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16669f2cdc3 ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.655569 containerd[1611]: 2025-12-15 09:02:04.635 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.655659 containerd[1611]: 2025-12-15 09:02:04.639 [INFO][4132] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0", GenerateName:"calico-apiserver-5756d6c8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4740f6ee-54b4-4ea9-b846-2cfc949dbd68", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5756d6c8cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93", Pod:"calico-apiserver-5756d6c8cc-7vpbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16669f2cdc3", MAC:"da:b9:4d:03:59:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.655708 containerd[1611]: 2025-12-15 09:02:04.650 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" Namespace="calico-apiserver" Pod="calico-apiserver-5756d6c8cc-7vpbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756d6c8cc--7vpbv-eth0" Dec 15 09:02:04.655000 audit: BPF prog-id=200 op=LOAD Dec 15 09:02:04.657000 audit: BPF prog-id=201 op=LOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.657000 audit: BPF prog-id=201 op=UNLOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.657000 audit: BPF prog-id=202 op=LOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.657000 audit: BPF prog-id=203 op=LOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.657000 audit: BPF prog-id=203 op=UNLOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.657000 audit: BPF prog-id=202 op=UNLOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.657000 audit: BPF prog-id=204 op=LOAD Dec 15 09:02:04.657000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4497 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464326331313134643737303366353735353235613062646262366439 Dec 15 09:02:04.661114 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:04.738292 containerd[1611]: time="2025-12-15T09:02:04.738249221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9jfj,Uid:1c5ab256-136a-4105-b959-3f278aa6f144,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd2c1114d7703f575525a0bdbb6d9824abf83834212b80292d00504bd93282b8\"" Dec 15 09:02:04.744504 containerd[1611]: time="2025-12-15T09:02:04.744444966Z" level=info msg="connecting to shim 361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93" address="unix:///run/containerd/s/38a83cc2a14337a7631a7f125afaeff2e0813df9c458f48db06514a7e9d986c3" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:04.765215 containerd[1611]: time="2025-12-15T09:02:04.765037538Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Dec 15 09:02:04.766242 containerd[1611]: time="2025-12-15T09:02:04.766214566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 15 09:02:04.766374 containerd[1611]: time="2025-12-15T09:02:04.766359680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:04.767843 kubelet[2779]: E1215 09:02:04.766634 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 15 09:02:04.768058 kubelet[2779]: E1215 09:02:04.767964 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 15 09:02:04.768310 kubelet[2779]: E1215 09:02:04.768256 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77fb68b4b8-6g7q8_calico-system(96d2f283-85f5-46e8-a0e8-3d26c4d28535): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:04.769001 containerd[1611]: time="2025-12-15T09:02:04.768978074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 15 09:02:04.769473 kubelet[2779]: E1215 09:02:04.769420 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77fb68b4b8-6g7q8" podUID="96d2f283-85f5-46e8-a0e8-3d26c4d28535" Dec 15 09:02:04.772043 systemd[1]: Started cri-containerd-361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93.scope - libcontainer container 361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93. 
Dec 15 09:02:04.816000 audit: BPF prog-id=205 op=LOAD Dec 15 09:02:04.817000 audit: BPF prog-id=206 op=LOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.817000 audit: BPF prog-id=206 op=UNLOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.817000 audit: BPF prog-id=207 op=LOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.817000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.817000 audit: BPF prog-id=208 op=LOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.817000 audit: BPF prog-id=208 op=UNLOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.817000 audit: BPF prog-id=207 op=UNLOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:02:04.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.817000 audit: BPF prog-id=209 op=LOAD Dec 15 09:02:04.817000 audit[4614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4603 pid=4614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336316430396365633366653361653735373361336532373131306339 Dec 15 09:02:04.820531 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:04.820000 audit: BPF prog-id=210 op=LOAD Dec 15 09:02:04.820000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5f05d780 a2=94 a3=1 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.820000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.820000 audit: BPF prog-id=210 op=UNLOAD Dec 15 09:02:04.820000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5f05d780 a2=94 a3=1 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:02:04.820000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.833000 audit: BPF prog-id=211 op=LOAD Dec 15 09:02:04.833000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5f05d770 a2=94 a3=4 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.833000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.833000 audit: BPF prog-id=211 op=UNLOAD Dec 15 09:02:04.833000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe5f05d770 a2=0 a3=4 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.833000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.833000 audit: BPF prog-id=212 op=LOAD Dec 15 09:02:04.833000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5f05d5d0 a2=94 a3=5 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.833000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.833000 audit: BPF prog-id=212 op=UNLOAD Dec 15 09:02:04.833000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe5f05d5d0 a2=0 a3=5 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.833000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.833000 audit: BPF prog-id=213 op=LOAD Dec 15 09:02:04.833000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5f05d7f0 a2=94 a3=6 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.833000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.833000 audit: BPF prog-id=213 op=UNLOAD Dec 15 09:02:04.833000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe5f05d7f0 a2=0 a3=6 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.833000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.835869 systemd-networkd[1316]: calidb16aa03229: Link UP Dec 15 09:02:04.836083 systemd-networkd[1316]: calidb16aa03229: Gained carrier Dec 15 09:02:04.837000 audit: BPF prog-id=214 op=LOAD Dec 15 09:02:04.837000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5f05cfa0 a2=94 a3=88 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.837000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.838000 audit: BPF prog-id=215 op=LOAD Dec 15 09:02:04.838000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe5f05ce20 a2=94 a3=2 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.838000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.838000 audit: BPF prog-id=215 op=UNLOAD Dec 15 09:02:04.838000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe5f05ce50 a2=0 a3=7ffe5f05cf50 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.838000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.838000 audit: BPF prog-id=214 op=UNLOAD Dec 15 09:02:04.838000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a309d10 a2=0 a3=b699f2b673ff2eb0 items=0 ppid=4297 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.838000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 15 09:02:04.850722 containerd[1611]: 2025-12-15 09:02:04.724 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0 coredns-674b8bbfcf- kube-system 4a3713b5-5b2b-432f-a2d2-a9138faef31f 894 0 2025-12-15 09:01:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ggrbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb16aa03229 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-" Dec 15 09:02:04.850722 containerd[1611]: 2025-12-15 09:02:04.724 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.850722 containerd[1611]: 2025-12-15 09:02:04.788 [INFO][4596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" HandleID="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Workload="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.788 [INFO][4596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" HandleID="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Workload="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ecf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ggrbh", "timestamp":"2025-12-15 09:02:04.788616338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.788 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.789 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.789 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.795 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" host="localhost" Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.804 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.808 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.810 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.812 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.850983 containerd[1611]: 2025-12-15 09:02:04.812 [INFO][4596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" host="localhost" Dec 15 09:02:04.851211 containerd[1611]: 2025-12-15 09:02:04.813 [INFO][4596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6 Dec 15 09:02:04.851211 containerd[1611]: 2025-12-15 09:02:04.818 [INFO][4596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" host="localhost" Dec 15 09:02:04.851211 containerd[1611]: 2025-12-15 09:02:04.823 [INFO][4596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" host="localhost" Dec 15 09:02:04.851211 containerd[1611]: 2025-12-15 09:02:04.823 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" host="localhost" Dec 15 09:02:04.851211 containerd[1611]: 2025-12-15 09:02:04.823 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:04.851211 containerd[1611]: 2025-12-15 09:02:04.823 [INFO][4596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" HandleID="k8s-pod-network.a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Workload="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.851326 containerd[1611]: 2025-12-15 09:02:04.829 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4a3713b5-5b2b-432f-a2d2-a9138faef31f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ggrbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb16aa03229", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.851380 containerd[1611]: 2025-12-15 09:02:04.829 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.851380 containerd[1611]: 2025-12-15 09:02:04.829 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb16aa03229 ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.851380 containerd[1611]: 2025-12-15 09:02:04.836 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.851446 containerd[1611]: 2025-12-15 09:02:04.836 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4a3713b5-5b2b-432f-a2d2-a9138faef31f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6", Pod:"coredns-674b8bbfcf-ggrbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb16aa03229", MAC:"a6:19:13:b7:0b:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.851446 containerd[1611]: 2025-12-15 09:02:04.845 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" Namespace="kube-system" Pod="coredns-674b8bbfcf-ggrbh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ggrbh-eth0" Dec 15 09:02:04.857000 audit: BPF prog-id=216 op=LOAD Dec 15 09:02:04.857000 audit[4666]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3efbd920 a2=98 a3=1999999999999999 items=0 ppid=4297 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 15 09:02:04.857000 audit: BPF prog-id=216 op=UNLOAD Dec 15 09:02:04.857000 audit[4666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe3efbd8f0 a3=0 items=0 ppid=4297 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.857000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 15 09:02:04.857000 audit: BPF prog-id=217 op=LOAD Dec 15 09:02:04.857000 audit[4666]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3efbd800 a2=94 a3=ffff items=0 ppid=4297 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 15 09:02:04.857000 audit: BPF prog-id=217 op=UNLOAD Dec 15 09:02:04.857000 audit[4666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe3efbd800 a2=94 a3=ffff items=0 ppid=4297 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 15 09:02:04.857000 audit: BPF prog-id=218 op=LOAD Dec 15 09:02:04.857000 audit[4666]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3efbd840 a2=94 a3=7ffe3efbda20 items=0 ppid=4297 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 15 09:02:04.857000 audit: BPF prog-id=218 op=UNLOAD Dec 15 09:02:04.857000 audit[4666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe3efbd840 a2=94 a3=7ffe3efbda20 items=0 ppid=4297 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 15 09:02:04.860593 kubelet[2779]: E1215 09:02:04.860453 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77fb68b4b8-6g7q8" 
podUID="96d2f283-85f5-46e8-a0e8-3d26c4d28535" Dec 15 09:02:04.870569 containerd[1611]: time="2025-12-15T09:02:04.870437955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756d6c8cc-7vpbv,Uid:4740f6ee-54b4-4ea9-b846-2cfc949dbd68,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"361d09cec3fe3ae7573a3e27110c96f2f996730eac95abbbdb551e14f3173c93\"" Dec 15 09:02:04.893927 containerd[1611]: time="2025-12-15T09:02:04.893713373Z" level=info msg="connecting to shim a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6" address="unix:///run/containerd/s/d8e05a7fdf54dbfa1c937924957d48618eadfcd3c5ca9c38af1b69d4503a6fed" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:04.893000 audit[4695]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:04.893000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf17919a0 a2=0 a3=7ffdf179198c items=0 ppid=2893 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:04.899000 audit[4695]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:04.899000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf17919a0 a2=0 a3=0 items=0 ppid=2893 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.899000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:04.921259 systemd[1]: Started cri-containerd-a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6.scope - libcontainer container a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6. Dec 15 09:02:04.933000 audit: BPF prog-id=219 op=LOAD Dec 15 09:02:04.934000 audit: BPF prog-id=220 op=LOAD Dec 15 09:02:04.934000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.934000 audit: BPF prog-id=220 op=UNLOAD Dec 15 09:02:04.934000 audit[4706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.934000 audit: BPF prog-id=221 op=LOAD Dec 15 09:02:04.934000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.935000 audit: BPF prog-id=222 op=LOAD Dec 15 09:02:04.935000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.935000 audit: BPF prog-id=222 op=UNLOAD Dec 15 09:02:04.935000 audit[4706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.935000 audit: BPF prog-id=221 op=UNLOAD Dec 15 09:02:04.935000 audit[4706]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.935000 audit: BPF prog-id=223 op=LOAD Dec 15 09:02:04.935000 audit[4706]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4694 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:04.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138313565396333313632333866343235383930633530663832373266 Dec 15 09:02:04.937955 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:04.958254 systemd-networkd[1316]: calid7451d7fd90: Link UP Dec 15 09:02:04.959463 systemd-networkd[1316]: calid7451d7fd90: Gained carrier Dec 15 09:02:04.971701 systemd-networkd[1316]: vxlan.calico: Link UP Dec 15 09:02:04.971712 systemd-networkd[1316]: vxlan.calico: Gained carrier Dec 15 09:02:04.991551 containerd[1611]: time="2025-12-15T09:02:04.989486139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ggrbh,Uid:4a3713b5-5b2b-432f-a2d2-a9138faef31f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6\"" Dec 15 09:02:04.993671 kubelet[2779]: E1215 09:02:04.993652 2779 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.744 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0 calico-kube-controllers-86f9f87d6- calico-system 5b0886e8-4f46-439e-a9a8-bea45b864b25 897 0 2025-12-15 09:01:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86f9f87d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86f9f87d6-hpvt5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid7451d7fd90 [] [] }} ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.748 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.804 [INFO][4634] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" HandleID="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Workload="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.996592 containerd[1611]: 
2025-12-15 09:02:04.804 [INFO][4634] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" HandleID="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Workload="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86f9f87d6-hpvt5", "timestamp":"2025-12-15 09:02:04.804246721 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.804 [INFO][4634] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.823 [INFO][4634] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.824 [INFO][4634] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.902 [INFO][4634] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.923 [INFO][4634] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.928 [INFO][4634] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.930 [INFO][4634] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.932 [INFO][4634] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.932 [INFO][4634] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.933 [INFO][4634] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.937 [INFO][4634] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.948 [INFO][4634] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.948 [INFO][4634] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" host="localhost" Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.948 [INFO][4634] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:04.996592 containerd[1611]: 2025-12-15 09:02:04.948 [INFO][4634] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" HandleID="k8s-pod-network.04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Workload="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.997272 containerd[1611]: 2025-12-15 09:02:04.951 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0", GenerateName:"calico-kube-controllers-86f9f87d6-", Namespace:"calico-system", SelfLink:"", UID:"5b0886e8-4f46-439e-a9a8-bea45b864b25", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f9f87d6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86f9f87d6-hpvt5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7451d7fd90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.997272 containerd[1611]: 2025-12-15 09:02:04.952 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.997272 containerd[1611]: 2025-12-15 09:02:04.952 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7451d7fd90 ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.997272 containerd[1611]: 2025-12-15 09:02:04.960 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.997272 containerd[1611]: 2025-12-15 
09:02:04.960 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0", GenerateName:"calico-kube-controllers-86f9f87d6-", Namespace:"calico-system", SelfLink:"", UID:"5b0886e8-4f46-439e-a9a8-bea45b864b25", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f9f87d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb", Pod:"calico-kube-controllers-86f9f87d6-hpvt5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7451d7fd90", MAC:"f2:3e:6a:e2:09:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:04.997272 containerd[1611]: 2025-12-15 
09:02:04.980 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" Namespace="calico-system" Pod="calico-kube-controllers-86f9f87d6-hpvt5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f9f87d6--hpvt5-eth0" Dec 15 09:02:04.999360 containerd[1611]: time="2025-12-15T09:02:04.999332141Z" level=info msg="CreateContainer within sandbox \"a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 15 09:02:05.022121 containerd[1611]: time="2025-12-15T09:02:05.022031771Z" level=info msg="Container 5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:02:05.027631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912786294.mount: Deactivated successfully. Dec 15 09:02:05.035044 containerd[1611]: time="2025-12-15T09:02:05.034998802Z" level=info msg="CreateContainer within sandbox \"a815e9c316238f425890c50f8272f87366be61d237e50562a8d86ed90bbdcff6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad\"" Dec 15 09:02:05.036823 containerd[1611]: time="2025-12-15T09:02:05.036769469Z" level=info msg="StartContainer for \"5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad\"" Dec 15 09:02:05.038053 containerd[1611]: time="2025-12-15T09:02:05.038008372Z" level=info msg="connecting to shim 5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad" address="unix:///run/containerd/s/d8e05a7fdf54dbfa1c937924957d48618eadfcd3c5ca9c38af1b69d4503a6fed" protocol=ttrpc version=3 Dec 15 09:02:05.041914 containerd[1611]: time="2025-12-15T09:02:05.041860911Z" level=info msg="connecting to shim 04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb" 
address="unix:///run/containerd/s/8701bb1ef7313f0080edf6bc582803ebe58ffbbb9035b792ad023279b63ceea1" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:05.064189 systemd-networkd[1316]: cali5955519a5de: Link UP Dec 15 09:02:05.065215 systemd-networkd[1316]: cali5955519a5de: Gained carrier Dec 15 09:02:05.075602 systemd[1]: Started cri-containerd-5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad.scope - libcontainer container 5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad. Dec 15 09:02:05.079000 audit: BPF prog-id=224 op=LOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee20d88d0 a2=98 a3=0 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=224 op=UNLOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffee20d88a0 a3=0 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=225 op=LOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee20d86e0 a2=94 a3=54428f items=0 ppid=4297 
pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=225 op=UNLOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffee20d86e0 a2=94 a3=54428f items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=226 op=LOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee20d8710 a2=94 a3=2 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=226 op=UNLOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffee20d8710 a2=0 a3=2 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=227 op=LOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee20d84c0 a2=94 a3=4 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=227 op=UNLOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffee20d84c0 a2=94 a3=4 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=228 op=LOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee20d85c0 a2=94 a3=7ffee20d8740 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.079000 audit: BPF prog-id=228 op=UNLOAD Dec 15 09:02:05.079000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffee20d85c0 a2=0 a3=7ffee20d8740 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.083000 audit: BPF prog-id=229 op=LOAD Dec 15 09:02:05.083000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee20d7cf0 a2=94 a3=2 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.083000 audit: BPF prog-id=229 op=UNLOAD Dec 15 09:02:05.083000 audit[4799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffee20d7cf0 a2=0 a3=2 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 15 09:02:05.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.083000 audit: BPF prog-id=230 op=LOAD Dec 15 09:02:05.083000 audit[4799]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffee20d7df0 a2=94 a3=30 items=0 ppid=4297 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.745 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0 coredns-674b8bbfcf- kube-system 5d2dcae9-0cbf-434e-ac9e-e764a111e542 892 0 2025-12-15 09:01:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dmm9f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5955519a5de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.745 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.805 [INFO][4616] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" HandleID="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Workload="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.806 [INFO][4616] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" HandleID="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Workload="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ddae0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dmm9f", "timestamp":"2025-12-15 09:02:04.805744663 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.806 [INFO][4616] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.948 [INFO][4616] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.948 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:04.999 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.025 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.031 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.033 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.036 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.036 [INFO][4616] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.039 [INFO][4616] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645 Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.045 [INFO][4616] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.051 [INFO][4616] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.051 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" host="localhost" Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.051 [INFO][4616] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 15 09:02:05.092544 containerd[1611]: 2025-12-15 09:02:05.051 [INFO][4616] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" HandleID="k8s-pod-network.e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Workload="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.093431 containerd[1611]: 2025-12-15 09:02:05.055 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5d2dcae9-0cbf-434e-ac9e-e764a111e542", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dmm9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5955519a5de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:05.093431 containerd[1611]: 2025-12-15 09:02:05.056 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.093431 containerd[1611]: 2025-12-15 09:02:05.056 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5955519a5de ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.093431 containerd[1611]: 2025-12-15 09:02:05.066 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.093431 containerd[1611]: 2025-12-15 09:02:05.067 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5d2dcae9-0cbf-434e-ac9e-e764a111e542", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.December, 15, 9, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645", Pod:"coredns-674b8bbfcf-dmm9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5955519a5de", MAC:"ce:2f:f5:69:01:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 15 09:02:05.093431 containerd[1611]: 2025-12-15 09:02:05.077 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" Namespace="kube-system" Pod="coredns-674b8bbfcf-dmm9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dmm9f-eth0" Dec 15 09:02:05.092000 audit: BPF prog-id=231 op=LOAD Dec 15 09:02:05.092000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe18225d50 a2=98 a3=0 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.092000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.093000 audit: BPF prog-id=231 op=UNLOAD Dec 15 09:02:05.093000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe18225d20 a3=0 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.093000 audit: BPF prog-id=232 op=LOAD Dec 15 09:02:05.093000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7ffe18225b40 a2=94 a3=54428f items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.093000 audit: BPF prog-id=232 op=UNLOAD Dec 15 09:02:05.093000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe18225b40 a2=94 a3=54428f items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.096173 systemd[1]: Started cri-containerd-04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb.scope - libcontainer container 04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb. 
Dec 15 09:02:05.095000 audit: BPF prog-id=233 op=LOAD Dec 15 09:02:05.095000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe18225b70 a2=94 a3=2 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.095000 audit: BPF prog-id=233 op=UNLOAD Dec 15 09:02:05.095000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe18225b70 a2=0 a3=2 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.095000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.109000 audit: BPF prog-id=234 op=LOAD Dec 15 09:02:05.110000 audit: BPF prog-id=235 op=LOAD Dec 15 09:02:05.110000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.110000 audit: BPF 
prog-id=235 op=UNLOAD Dec 15 09:02:05.110000 audit[4762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.110000 audit: BPF prog-id=236 op=LOAD Dec 15 09:02:05.110000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.110000 audit: BPF prog-id=237 op=LOAD Dec 15 09:02:05.110000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.110000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.111000 audit: BPF prog-id=237 op=UNLOAD Dec 15 09:02:05.111000 audit[4762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.111000 audit: BPF prog-id=236 op=UNLOAD Dec 15 09:02:05.111000 audit[4762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.111000 audit: BPF prog-id=238 op=LOAD Dec 15 09:02:05.111000 audit: BPF prog-id=239 op=LOAD Dec 15 09:02:05.111000 audit[4762]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4694 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323137373838343265633234386138393765623437353737343962 Dec 15 09:02:05.112000 audit: BPF prog-id=240 op=LOAD Dec 15 09:02:05.112000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.112000 audit: BPF prog-id=240 op=UNLOAD Dec 15 09:02:05.112000 audit[4786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.113000 audit: BPF prog-id=241 op=LOAD Dec 15 09:02:05.113000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.113000 audit: BPF prog-id=242 op=LOAD Dec 15 09:02:05.113000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.113000 audit: BPF prog-id=242 op=UNLOAD Dec 15 09:02:05.113000 audit[4786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.114000 audit: BPF prog-id=241 op=UNLOAD Dec 15 09:02:05.114000 audit[4786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.114000 audit: BPF prog-id=243 op=LOAD Dec 15 09:02:05.114000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4761 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034656136633530353231323631306430653832623862373366663737 Dec 15 09:02:05.117417 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:05.160900 containerd[1611]: time="2025-12-15T09:02:05.160787324Z" level=info msg="StartContainer for \"5d21778842ec248a897eb4757749bf4135b393f857a776952d796cd48f61e4ad\" returns successfully" Dec 15 09:02:05.162675 containerd[1611]: time="2025-12-15T09:02:05.162533985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f9f87d6-hpvt5,Uid:5b0886e8-4f46-439e-a9a8-bea45b864b25,Namespace:calico-system,Attempt:0,} returns sandbox id \"04ea6c505212610d0e82b8b73ff770e0a3c56fac82875d7d2d79ae5ea31a28cb\"" Dec 15 09:02:05.182087 containerd[1611]: time="2025-12-15T09:02:05.181695127Z" level=info msg="connecting to shim e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645" 
address="unix:///run/containerd/s/0d935e01a4967797845231b58ea0a877a41e4f26375cdd97415f8613e83ca9bc" namespace=k8s.io protocol=ttrpc version=3 Dec 15 09:02:05.204997 containerd[1611]: time="2025-12-15T09:02:05.204721862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:05.214321 containerd[1611]: time="2025-12-15T09:02:05.214274570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 15 09:02:05.214569 containerd[1611]: time="2025-12-15T09:02:05.214432789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:05.214817 kubelet[2779]: E1215 09:02:05.214767 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 15 09:02:05.214960 kubelet[2779]: E1215 09:02:05.214938 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 15 09:02:05.215895 kubelet[2779]: E1215 09:02:05.215422 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfccg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 15 09:02:05.216460 containerd[1611]: time="2025-12-15T09:02:05.216396599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 15 09:02:05.217351 systemd[1]: Started cri-containerd-e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645.scope - libcontainer container e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645. Dec 15 09:02:05.234000 audit: BPF prog-id=244 op=LOAD Dec 15 09:02:05.234000 audit: BPF prog-id=245 op=LOAD Dec 15 09:02:05.234000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4852 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.234000 audit: BPF prog-id=245 op=UNLOAD Dec 15 09:02:05.234000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4852 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.235000 audit: BPF prog-id=246 op=LOAD Dec 15 09:02:05.235000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4852 
pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.235000 audit: BPF prog-id=247 op=LOAD Dec 15 09:02:05.235000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4852 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.235000 audit: BPF prog-id=247 op=UNLOAD Dec 15 09:02:05.235000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4852 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.235000 audit: BPF prog-id=246 op=UNLOAD Dec 15 09:02:05.235000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=4852 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.235000 audit: BPF prog-id=248 op=LOAD Dec 15 09:02:05.235000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4852 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535623966303866613132376164653463373930316663656565323266 Dec 15 09:02:05.238770 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 15 09:02:05.318043 containerd[1611]: time="2025-12-15T09:02:05.317990004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dmm9f,Uid:5d2dcae9-0cbf-434e-ac9e-e764a111e542,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645\"" Dec 15 09:02:05.320055 kubelet[2779]: E1215 09:02:05.319982 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:05.325929 containerd[1611]: time="2025-12-15T09:02:05.325865382Z" 
level=info msg="CreateContainer within sandbox \"e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 15 09:02:05.331000 audit: BPF prog-id=249 op=LOAD Dec 15 09:02:05.331000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe18225a30 a2=94 a3=1 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.331000 audit: BPF prog-id=249 op=UNLOAD Dec 15 09:02:05.331000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe18225a30 a2=94 a3=1 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.331000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.337357 containerd[1611]: time="2025-12-15T09:02:05.337319222Z" level=info msg="Container 276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa: CDI devices from CRI Config.CDIDevices: []" Dec 15 09:02:05.340000 audit: BPF prog-id=250 op=LOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe18225a20 a2=94 a3=4 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 15 09:02:05.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.340000 audit: BPF prog-id=250 op=UNLOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe18225a20 a2=0 a3=4 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.340000 audit: BPF prog-id=251 op=LOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe18225880 a2=94 a3=5 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.340000 audit: BPF prog-id=251 op=UNLOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe18225880 a2=0 a3=5 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.340000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.340000 audit: BPF prog-id=252 op=LOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe18225aa0 a2=94 a3=6 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.340000 audit: BPF prog-id=252 op=UNLOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe18225aa0 a2=0 a3=6 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.340000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.340000 audit: BPF prog-id=253 op=LOAD Dec 15 09:02:05.340000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe18225250 a2=94 a3=88 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.340000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.341000 audit: BPF prog-id=254 op=LOAD Dec 15 09:02:05.341000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe182250d0 a2=94 a3=2 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.341000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.341000 audit: BPF prog-id=254 op=UNLOAD Dec 15 09:02:05.341000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe18225100 a2=0 a3=7ffe18225200 items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.341000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.342000 audit: BPF prog-id=253 op=UNLOAD Dec 15 09:02:05.342000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=15910d10 a2=0 a3=d9629dad1521c83e items=0 ppid=4297 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.342000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 15 09:02:05.344626 containerd[1611]: time="2025-12-15T09:02:05.344529117Z" level=info msg="CreateContainer within sandbox \"e5b9f08fa127ade4c7901fceee22f0b9f5b3d1d23de4aa4f766c1db0422e8645\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa\"" Dec 15 09:02:05.345474 containerd[1611]: time="2025-12-15T09:02:05.345379970Z" level=info msg="StartContainer for \"276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa\"" Dec 15 09:02:05.346761 containerd[1611]: time="2025-12-15T09:02:05.346741355Z" level=info msg="connecting to shim 276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa" address="unix:///run/containerd/s/0d935e01a4967797845231b58ea0a877a41e4f26375cdd97415f8613e83ca9bc" protocol=ttrpc version=3 Dec 15 09:02:05.350000 audit: BPF prog-id=230 op=UNLOAD Dec 15 09:02:05.350000 audit[4297]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00095e440 a2=0 a3=0 items=0 ppid=4288 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.350000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 15 09:02:05.371010 systemd[1]: Started cri-containerd-276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa.scope - libcontainer container 276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa. 
Dec 15 09:02:05.383000 audit: BPF prog-id=255 op=LOAD Dec 15 09:02:05.384000 audit: BPF prog-id=256 op=LOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.384000 audit: BPF prog-id=256 op=UNLOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.384000 audit: BPF prog-id=257 op=LOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.384000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.384000 audit: BPF prog-id=258 op=LOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.384000 audit: BPF prog-id=258 op=UNLOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.384000 audit: BPF prog-id=257 op=UNLOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 
09:02:05.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.384000 audit: BPF prog-id=259 op=LOAD Dec 15 09:02:05.384000 audit[4892]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4852 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237366639376335626138366134353436616230353533663030633935 Dec 15 09:02:05.408769 containerd[1611]: time="2025-12-15T09:02:05.408696998Z" level=info msg="StartContainer for \"276f97c5ba86a4546ab0553f00c958daa60633b895f08b7eff8b24c4216f12aa\" returns successfully" Dec 15 09:02:05.430000 audit[4950]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4950 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 15 09:02:05.430000 audit[4950]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffeaa602340 a2=0 a3=7ffeaa60232c items=0 ppid=4297 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.430000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 15 09:02:05.436000 audit[4954]: NETFILTER_CFG table=nat:124 family=2 
entries=15 op=nft_register_chain pid=4954 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 15 09:02:05.436000 audit[4954]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcd5fa0770 a2=0 a3=7ffcd5fa075c items=0 ppid=4297 pid=4954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 15 09:02:05.439000 audit[4948]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4948 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 15 09:02:05.439000 audit[4948]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc70115940 a2=0 a3=7ffc7011592c items=0 ppid=4297 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.439000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 15 09:02:05.448000 audit[4953]: NETFILTER_CFG table=filter:126 family=2 entries=275 op=nft_register_chain pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 15 09:02:05.448000 audit[4953]: SYSCALL arch=c000003e syscall=46 success=yes exit=161724 a0=3 a1=7ffcf22a75d0 a2=0 a3=7ffcf22a75bc items=0 ppid=4297 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.448000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 15 09:02:05.497000 audit[4968]: NETFILTER_CFG table=filter:127 family=2 entries=78 op=nft_register_chain pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 15 09:02:05.497000 audit[4968]: SYSCALL arch=c000003e syscall=46 success=yes exit=41028 a0=3 a1=7ffda19da7f0 a2=0 a3=7ffda19da7dc items=0 ppid=4297 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:05.497000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 15 09:02:05.513220 systemd-networkd[1316]: calia300e31c4bb: Gained IPv6LL Dec 15 09:02:05.580459 containerd[1611]: time="2025-12-15T09:02:05.580402142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:05.582028 containerd[1611]: time="2025-12-15T09:02:05.581998129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 15 09:02:05.582092 containerd[1611]: time="2025-12-15T09:02:05.582069193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:05.582289 kubelet[2779]: E1215 09:02:05.582249 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 15 09:02:05.582340 kubelet[2779]: E1215 09:02:05.582301 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 15 09:02:05.582637 kubelet[2779]: E1215 09:02:05.582571 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl4g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler
:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9jfj_calico-system(1c5ab256-136a-4105-b959-3f278aa6f144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:05.582902 containerd[1611]: time="2025-12-15T09:02:05.582622806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 15 09:02:05.584171 kubelet[2779]: E1215 09:02:05.584127 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-k9jfj" podUID="1c5ab256-136a-4105-b959-3f278aa6f144" Dec 15 09:02:05.858556 kubelet[2779]: E1215 09:02:05.858470 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:05.862457 kubelet[2779]: E1215 09:02:05.862422 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:05.864729 kubelet[2779]: E1215 09:02:05.864676 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9jfj" podUID="1c5ab256-136a-4105-b959-3f278aa6f144" Dec 15 09:02:05.885739 kubelet[2779]: I1215 09:02:05.885687 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ggrbh" podStartSLOduration=42.88567256 podStartE2EDuration="42.88567256s" podCreationTimestamp="2025-12-15 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 09:02:05.885180644 +0000 UTC m=+49.268952548" watchObservedRunningTime="2025-12-15 09:02:05.88567256 +0000 UTC m=+49.269444474" Dec 15 09:02:05.897047 systemd-networkd[1316]: cali3a89a9a3933: Gained IPv6LL Dec 15 09:02:06.032000 audit[4970]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:06.032000 audit[4970]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffded9822d0 a2=0 a3=7ffded9822bc items=0 ppid=2893 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:06.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:06.037000 audit[4970]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:06.037000 audit[4970]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffded9822d0 a2=0 a3=0 items=0 ppid=2893 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:06.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:06.044199 kubelet[2779]: I1215 09:02:06.043931 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dmm9f" podStartSLOduration=43.043912668 podStartE2EDuration="43.043912668s" podCreationTimestamp="2025-12-15 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 09:02:06.042435665 +0000 UTC m=+49.426207579" watchObservedRunningTime="2025-12-15 09:02:06.043912668 +0000 UTC m=+49.427684582" Dec 15 09:02:06.080837 containerd[1611]: time="2025-12-15T09:02:06.080766981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:06.082047 containerd[1611]: time="2025-12-15T09:02:06.081981378Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 15 09:02:06.083820 containerd[1611]: time="2025-12-15T09:02:06.082024790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:06.083874 kubelet[2779]: E1215 09:02:06.082297 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:06.083874 kubelet[2779]: E1215 09:02:06.082345 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:06.083874 kubelet[2779]: E1215 09:02:06.082545 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krf6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5756d6c8cc-7vpbv_calico-apiserver(4740f6ee-54b4-4ea9-b846-2cfc949dbd68): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:06.084555 containerd[1611]: time="2025-12-15T09:02:06.084273727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 15 09:02:06.084710 kubelet[2779]: E1215 09:02:06.084681 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:02:06.408977 systemd-networkd[1316]: vxlan.calico: Gained IPv6LL Dec 15 09:02:06.432491 containerd[1611]: time="2025-12-15T09:02:06.432419961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:06.433914 containerd[1611]: time="2025-12-15T09:02:06.433870935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 15 09:02:06.433994 containerd[1611]: time="2025-12-15T09:02:06.433970071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:06.434240 kubelet[2779]: E1215 09:02:06.434182 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 15 09:02:06.434590 kubelet[2779]: E1215 09:02:06.434247 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 15 09:02:06.434617 kubelet[2779]: E1215 09:02:06.434558 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6mdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f9f87d6-hpvt5_calico-system(5b0886e8-4f46-439e-a9a8-bea45b864b25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:06.434893 containerd[1611]: time="2025-12-15T09:02:06.434869385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 15 09:02:06.436002 kubelet[2779]: E1215 09:02:06.435968 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" podUID="5b0886e8-4f46-439e-a9a8-bea45b864b25" Dec 15 09:02:06.473014 systemd-networkd[1316]: calidb16aa03229: Gained IPv6LL Dec 15 09:02:06.600986 systemd-networkd[1316]: cali16669f2cdc3: Gained IPv6LL Dec 15 09:02:06.728978 systemd-networkd[1316]: cali5955519a5de: Gained IPv6LL Dec 15 09:02:06.779508 containerd[1611]: time="2025-12-15T09:02:06.779466754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:06.781039 containerd[1611]: time="2025-12-15T09:02:06.780996175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 15 09:02:06.781039 containerd[1611]: time="2025-12-15T09:02:06.781089500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:06.781327 kubelet[2779]: E1215 09:02:06.781221 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 15 09:02:06.781327 kubelet[2779]: E1215 09:02:06.781273 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 15 09:02:06.781521 kubelet[2779]: E1215 09:02:06.781473 2779 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfccg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:06.782700 kubelet[2779]: E1215 09:02:06.782655 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:02:06.867275 kubelet[2779]: E1215 09:02:06.867017 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" podUID="5b0886e8-4f46-439e-a9a8-bea45b864b25" Dec 15 09:02:06.867472 kubelet[2779]: E1215 09:02:06.867299 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:06.867988 kubelet[2779]: E1215 
09:02:06.867944 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:06.868672 kubelet[2779]: E1215 09:02:06.868503 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:02:06.868672 kubelet[2779]: E1215 09:02:06.868632 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:02:06.902000 audit[4972]: NETFILTER_CFG table=filter:130 family=2 entries=17 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:06.902000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=5248 a0=3 a1=7ffca6b25fa0 a2=0 a3=7ffca6b25f8c items=0 ppid=2893 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:06.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:06.914000 audit[4972]: NETFILTER_CFG table=nat:131 family=2 entries=47 op=nft_register_chain pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:06.914000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffca6b25fa0 a2=0 a3=7ffca6b25f8c items=0 ppid=2893 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:06.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:06.921031 systemd-networkd[1316]: calid7451d7fd90: Gained IPv6LL Dec 15 09:02:07.868327 kubelet[2779]: E1215 09:02:07.868243 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:07.868327 kubelet[2779]: E1215 09:02:07.868292 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:09.009977 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:44972.service - OpenSSH per-connection server daemon (10.0.0.1:44972). 
Dec 15 09:02:09.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.128:22-10.0.0.1:44972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:09.015794 kernel: kauditd_printk_skb: 402 callbacks suppressed Dec 15 09:02:09.015962 kernel: audit: type=1130 audit(1765789329.009:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.128:22-10.0.0.1:44972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:09.069000 audit[4978]: USER_ACCT pid=4978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.070469 sshd[4978]: Accepted publickey for core from 10.0.0.1 port 44972 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:09.075000 audit[4978]: CRED_ACQ pid=4978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.077534 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:09.081436 kernel: audit: type=1101 audit(1765789329.069:752): pid=4978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.081509 kernel: audit: type=1103 audit(1765789329.075:753): pid=4978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.081526 kernel: audit: type=1006 audit(1765789329.075:754): pid=4978 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 15 09:02:09.081949 systemd-logind[1586]: New session 11 of user core. Dec 15 09:02:09.075000 audit[4978]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd219a2c90 a2=3 a3=0 items=0 ppid=1 pid=4978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:09.090259 kernel: audit: type=1300 audit(1765789329.075:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd219a2c90 a2=3 a3=0 items=0 ppid=1 pid=4978 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:09.090302 kernel: audit: type=1327 audit(1765789329.075:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:09.075000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:09.094125 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 15 09:02:09.096000 audit[4978]: USER_START pid=4978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.098000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.107648 kernel: audit: type=1105 audit(1765789329.096:755): pid=4978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.107711 kernel: audit: type=1103 audit(1765789329.098:756): pid=4983 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.209326 sshd[4983]: Connection closed by 10.0.0.1 port 44972 Dec 15 09:02:09.210988 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:09.211000 audit[4978]: USER_END pid=4978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.218865 kernel: audit: type=1106 audit(1765789329.211:757): pid=4978 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.216430 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Dec 15 09:02:09.217165 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:44972.service: Deactivated successfully. Dec 15 09:02:09.211000 audit[4978]: CRED_DISP pid=4978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:09.219604 systemd[1]: session-11.scope: Deactivated successfully. Dec 15 09:02:09.221712 systemd-logind[1586]: Removed session 11. Dec 15 09:02:09.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.128:22-10.0.0.1:44972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:09.223904 kernel: audit: type=1104 audit(1765789329.211:758): pid=4978 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.227845 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 15 09:02:14.227920 kernel: audit: type=1130 audit(1765789334.221:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.128:22-10.0.0.1:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:14.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.128:22-10.0.0.1:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:14.222663 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:44766.service - OpenSSH per-connection server daemon (10.0.0.1:44766). Dec 15 09:02:14.286000 audit[5006]: USER_ACCT pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.287397 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 44766 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:14.290092 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:14.287000 audit[5006]: CRED_ACQ pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.294742 systemd-logind[1586]: New session 12 of user core. 
Dec 15 09:02:14.296542 kernel: audit: type=1101 audit(1765789334.286:761): pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.296578 kernel: audit: type=1103 audit(1765789334.287:762): pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.296617 kernel: audit: type=1006 audit(1765789334.287:763): pid=5006 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 15 09:02:14.287000 audit[5006]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea8a00d50 a2=3 a3=0 items=0 ppid=1 pid=5006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:14.304429 kernel: audit: type=1300 audit(1765789334.287:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea8a00d50 a2=3 a3=0 items=0 ppid=1 pid=5006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:14.304472 kernel: audit: type=1327 audit(1765789334.287:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:14.287000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:14.311987 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 15 09:02:14.313000 audit[5006]: USER_START pid=5006 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.315000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.324337 kernel: audit: type=1105 audit(1765789334.313:764): pid=5006 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.324391 kernel: audit: type=1103 audit(1765789334.315:765): pid=5010 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.396002 sshd[5010]: Connection closed by 10.0.0.1 port 44766 Dec 15 09:02:14.396412 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:14.397000 audit[5006]: USER_END pid=5006 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.397000 audit[5006]: CRED_DISP pid=5006 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.408080 kernel: audit: type=1106 audit(1765789334.397:766): pid=5006 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.408125 kernel: audit: type=1104 audit(1765789334.397:767): pid=5006 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.416615 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:44766.service: Deactivated successfully. Dec 15 09:02:14.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.128:22-10.0.0.1:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:14.418553 systemd[1]: session-12.scope: Deactivated successfully. Dec 15 09:02:14.419635 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Dec 15 09:02:14.422853 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:44768.service - OpenSSH per-connection server daemon (10.0.0.1:44768). Dec 15 09:02:14.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.128:22-10.0.0.1:44768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:14.423595 systemd-logind[1586]: Removed session 12. 
Dec 15 09:02:14.485000 audit[5024]: USER_ACCT pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.486835 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 44768 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:14.487000 audit[5024]: CRED_ACQ pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.487000 audit[5024]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd98448dc0 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:14.487000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:14.489217 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:14.493854 systemd-logind[1586]: New session 13 of user core. Dec 15 09:02:14.502973 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 15 09:02:14.504000 audit[5024]: USER_START pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.506000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.674003 sshd[5028]: Connection closed by 10.0.0.1 port 44768 Dec 15 09:02:14.674256 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:14.674000 audit[5024]: USER_END pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.674000 audit[5024]: CRED_DISP pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.683547 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:44768.service: Deactivated successfully. Dec 15 09:02:14.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.128:22-10.0.0.1:44768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:14.685572 systemd[1]: session-13.scope: Deactivated successfully. Dec 15 09:02:14.686343 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. 
Dec 15 09:02:14.689253 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:44782.service - OpenSSH per-connection server daemon (10.0.0.1:44782). Dec 15 09:02:14.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.128:22-10.0.0.1:44782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:14.690514 systemd-logind[1586]: Removed session 13. Dec 15 09:02:14.765000 audit[5040]: USER_ACCT pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.766765 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 44782 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:14.767000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.767000 audit[5040]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9b13b770 a2=3 a3=0 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:14.767000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:14.769087 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:14.773530 systemd-logind[1586]: New session 14 of user core. Dec 15 09:02:14.780970 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 15 09:02:14.782000 audit[5040]: USER_START pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.783000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.923448 sshd[5044]: Connection closed by 10.0.0.1 port 44782 Dec 15 09:02:14.923760 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:14.925000 audit[5040]: USER_END pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.925000 audit[5040]: CRED_DISP pid=5040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:14.928528 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:44782.service: Deactivated successfully. Dec 15 09:02:14.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.128:22-10.0.0.1:44782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:14.931297 systemd[1]: session-14.scope: Deactivated successfully. Dec 15 09:02:14.932896 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. 
Dec 15 09:02:14.935382 systemd-logind[1586]: Removed session 14. Dec 15 09:02:15.729409 containerd[1611]: time="2025-12-15T09:02:15.729296501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 15 09:02:16.050341 containerd[1611]: time="2025-12-15T09:02:16.050291213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:16.051636 containerd[1611]: time="2025-12-15T09:02:16.051579770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 15 09:02:16.051636 containerd[1611]: time="2025-12-15T09:02:16.051642468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:16.051886 kubelet[2779]: E1215 09:02:16.051801 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:16.051886 kubelet[2779]: E1215 09:02:16.051864 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:16.052338 kubelet[2779]: E1215 09:02:16.052003 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggr54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5756d6c8cc-j8csg_calico-apiserver(8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:16.053174 kubelet[2779]: E1215 09:02:16.053139 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd" Dec 15 09:02:18.720505 containerd[1611]: time="2025-12-15T09:02:18.720441363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 15 09:02:19.092020 containerd[1611]: time="2025-12-15T09:02:19.091957096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:19.093422 containerd[1611]: time="2025-12-15T09:02:19.093359947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 15 09:02:19.093478 containerd[1611]: time="2025-12-15T09:02:19.093410031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:19.093578 kubelet[2779]: E1215 09:02:19.093541 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:19.093941 kubelet[2779]: E1215 09:02:19.093580 2779 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:19.093941 kubelet[2779]: E1215 09:02:19.093773 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krf6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5756d6c8cc-7vpbv_calico-apiserver(4740f6ee-54b4-4ea9-b846-2cfc949dbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:19.094118 containerd[1611]: time="2025-12-15T09:02:19.093932034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 15 09:02:19.095089 kubelet[2779]: E1215 09:02:19.095052 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:02:19.429899 containerd[1611]: time="2025-12-15T09:02:19.429738934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 
09:02:19.431399 containerd[1611]: time="2025-12-15T09:02:19.431329077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 15 09:02:19.431446 containerd[1611]: time="2025-12-15T09:02:19.431402786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:19.431687 kubelet[2779]: E1215 09:02:19.431632 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 15 09:02:19.431687 kubelet[2779]: E1215 09:02:19.431689 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 15 09:02:19.431939 kubelet[2779]: E1215 09:02:19.431853 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a068dc8facc4dbe9f1ecf18e1d1e8f5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77fb68b4b8-6g7q8_calico-system(96d2f283-85f5-46e8-a0e8-3d26c4d28535): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:19.433885 containerd[1611]: time="2025-12-15T09:02:19.433849692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 15 09:02:19.775558 containerd[1611]: 
time="2025-12-15T09:02:19.775517064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:19.776784 containerd[1611]: time="2025-12-15T09:02:19.776730148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 15 09:02:19.776843 containerd[1611]: time="2025-12-15T09:02:19.776745536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:19.777010 kubelet[2779]: E1215 09:02:19.776963 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 15 09:02:19.777055 kubelet[2779]: E1215 09:02:19.777014 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 15 09:02:19.777273 kubelet[2779]: E1215 09:02:19.777226 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77fb68b4b8-6g7q8_calico-system(96d2f283-85f5-46e8-a0e8-3d26c4d28535): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:19.777387 containerd[1611]: time="2025-12-15T09:02:19.777261318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 15 09:02:19.778756 kubelet[2779]: E1215 09:02:19.778677 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77fb68b4b8-6g7q8" podUID="96d2f283-85f5-46e8-a0e8-3d26c4d28535" Dec 15 09:02:19.940019 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:44798.service - OpenSSH per-connection server daemon (10.0.0.1:44798). Dec 15 09:02:19.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.128:22-10.0.0.1:44798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:19.941349 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 15 09:02:19.941422 kernel: audit: type=1130 audit(1765789339.939:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.128:22-10.0.0.1:44798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:20.000000 audit[5071]: USER_ACCT pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.001348 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 44798 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:20.003639 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:20.001000 audit[5071]: CRED_ACQ pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.007858 systemd-logind[1586]: New session 15 of user core. Dec 15 09:02:20.010841 kernel: audit: type=1101 audit(1765789340.000:788): pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.010936 kernel: audit: type=1103 audit(1765789340.001:789): pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.010960 kernel: audit: type=1006 audit(1765789340.001:790): pid=5071 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 15 09:02:20.001000 audit[5071]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9630d6b0 a2=3 a3=0 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:20.018934 kernel: audit: type=1300 audit(1765789340.001:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9630d6b0 a2=3 a3=0 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:20.018972 kernel: audit: type=1327 audit(1765789340.001:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:20.001000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:20.027970 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 15 09:02:20.029000 audit[5071]: USER_START pid=5071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.031000 audit[5075]: CRED_ACQ pid=5075 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.040767 kernel: audit: type=1105 audit(1765789340.029:791): pid=5071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.040929 kernel: audit: type=1103 audit(1765789340.031:792): pid=5075 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.096049 sshd[5075]: Connection closed by 10.0.0.1 port 44798 Dec 15 09:02:20.096343 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:20.096000 audit[5071]: USER_END pid=5071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.101355 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:44798.service: Deactivated successfully. Dec 15 09:02:20.096000 audit[5071]: CRED_DISP pid=5071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.103569 systemd[1]: session-15.scope: Deactivated successfully. Dec 15 09:02:20.104610 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Dec 15 09:02:20.106039 systemd-logind[1586]: Removed session 15. 
Dec 15 09:02:20.107654 kernel: audit: type=1106 audit(1765789340.096:793): pid=5071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.107736 kernel: audit: type=1104 audit(1765789340.096:794): pid=5071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:20.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.128:22-10.0.0.1:44798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:20.148901 containerd[1611]: time="2025-12-15T09:02:20.148798901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:20.149938 containerd[1611]: time="2025-12-15T09:02:20.149891267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 15 09:02:20.149938 containerd[1611]: time="2025-12-15T09:02:20.149925702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:20.150153 kubelet[2779]: E1215 09:02:20.150114 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 15 09:02:20.150446 
kubelet[2779]: E1215 09:02:20.150166 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 15 09:02:20.150446 kubelet[2779]: E1215 09:02:20.150291 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfccg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:20.152625 containerd[1611]: time="2025-12-15T09:02:20.152387797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 15 09:02:20.459140 containerd[1611]: time="2025-12-15T09:02:20.458959769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:20.460237 containerd[1611]: time="2025-12-15T09:02:20.460198982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 15 09:02:20.460313 containerd[1611]: time="2025-12-15T09:02:20.460239568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:20.460452 kubelet[2779]: E1215 09:02:20.460412 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 15 09:02:20.460503 kubelet[2779]: E1215 09:02:20.460462 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 15 09:02:20.460635 kubelet[2779]: E1215 09:02:20.460587 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfccg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostPr
ofile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:20.461874 kubelet[2779]: E1215 09:02:20.461798 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:02:20.720554 containerd[1611]: time="2025-12-15T09:02:20.720430209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 15 09:02:21.117542 containerd[1611]: time="2025-12-15T09:02:21.117498782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:21.118696 containerd[1611]: time="2025-12-15T09:02:21.118654868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" Dec 15 09:02:21.118867 containerd[1611]: time="2025-12-15T09:02:21.118724470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:21.118929 kubelet[2779]: E1215 09:02:21.118887 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 15 09:02:21.118969 kubelet[2779]: E1215 09:02:21.118934 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 15 09:02:21.119126 kubelet[2779]: E1215 09:02:21.119073 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl4g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9jfj_calico-system(1c5ab256-136a-4105-b959-3f278aa6f144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:21.120323 kubelet[2779]: E1215 09:02:21.120278 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9jfj" podUID="1c5ab256-136a-4105-b959-3f278aa6f144" Dec 15 09:02:21.720572 containerd[1611]: time="2025-12-15T09:02:21.720370001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 15 09:02:22.086513 containerd[1611]: time="2025-12-15T09:02:22.086447322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:22.087660 containerd[1611]: 
time="2025-12-15T09:02:22.087616603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 15 09:02:22.087745 containerd[1611]: time="2025-12-15T09:02:22.087690713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:22.087930 kubelet[2779]: E1215 09:02:22.087883 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 15 09:02:22.088293 kubelet[2779]: E1215 09:02:22.087936 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 15 09:02:22.088293 kubelet[2779]: E1215 09:02:22.088070 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6mdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f9f87d6-hpvt5_calico-system(5b0886e8-4f46-439e-a9a8-bea45b864b25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:22.089320 kubelet[2779]: E1215 09:02:22.089272 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" podUID="5b0886e8-4f46-439e-a9a8-bea45b864b25" Dec 15 09:02:25.120767 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:40724.service - OpenSSH per-connection server daemon (10.0.0.1:40724). 
Dec 15 09:02:25.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.128:22-10.0.0.1:40724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:25.122204 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 15 09:02:25.122299 kernel: audit: type=1130 audit(1765789345.120:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.128:22-10.0.0.1:40724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:25.179000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.180785 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 40724 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:25.183351 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:25.181000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.188149 systemd-logind[1586]: New session 16 of user core. 
Dec 15 09:02:25.191645 kernel: audit: type=1101 audit(1765789345.179:797): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.191742 kernel: audit: type=1103 audit(1765789345.181:798): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.191768 kernel: audit: type=1006 audit(1765789345.181:799): pid=5090 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 15 09:02:25.181000 audit[5090]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe48d7c7b0 a2=3 a3=0 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:25.200853 kernel: audit: type=1300 audit(1765789345.181:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe48d7c7b0 a2=3 a3=0 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:25.200879 kernel: audit: type=1327 audit(1765789345.181:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:25.181000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:25.208984 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 15 09:02:25.210000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.212000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.222713 kernel: audit: type=1105 audit(1765789345.210:800): pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.222828 kernel: audit: type=1103 audit(1765789345.212:801): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.282013 sshd[5094]: Connection closed by 10.0.0.1 port 40724 Dec 15 09:02:25.282282 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:25.282000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.287153 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:40724.service: Deactivated successfully. 
Dec 15 09:02:25.289417 systemd[1]: session-16.scope: Deactivated successfully. Dec 15 09:02:25.282000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.290300 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Dec 15 09:02:25.291734 systemd-logind[1586]: Removed session 16. Dec 15 09:02:25.294325 kernel: audit: type=1106 audit(1765789345.282:802): pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.294391 kernel: audit: type=1104 audit(1765789345.282:803): pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:25.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.128:22-10.0.0.1:40724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:30.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.128:22-10.0.0.1:35568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:30.295900 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568). 
Dec 15 09:02:30.297847 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 15 09:02:30.297902 kernel: audit: type=1130 audit(1765789350.295:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.128:22-10.0.0.1:35568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:30.360000 audit[5116]: USER_ACCT pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.361561 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:30.364129 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:30.368828 systemd-logind[1586]: New session 17 of user core. 
Dec 15 09:02:30.362000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.409351 kernel: audit: type=1101 audit(1765789350.360:806): pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.409402 kernel: audit: type=1103 audit(1765789350.362:807): pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.409427 kernel: audit: type=1006 audit(1765789350.362:808): pid=5116 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 15 09:02:30.412527 kernel: audit: type=1300 audit(1765789350.362:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b674a90 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:30.362000 audit[5116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b674a90 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:30.362000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:30.420707 kernel: audit: type=1327 audit(1765789350.362:808): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:30.429988 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 15 09:02:30.431000 audit[5116]: USER_START pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.434000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.445012 kernel: audit: type=1105 audit(1765789350.431:809): pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.445056 kernel: audit: type=1103 audit(1765789350.434:810): pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.513307 sshd[5120]: Connection closed by 10.0.0.1 port 35568 Dec 15 09:02:30.513594 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:30.513000 audit[5116]: USER_END pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 15 09:02:30.518598 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:35568.service: Deactivated successfully. Dec 15 09:02:30.520948 systemd[1]: session-17.scope: Deactivated successfully. Dec 15 09:02:30.514000 audit[5116]: CRED_DISP pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.521773 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Dec 15 09:02:30.523164 systemd-logind[1586]: Removed session 17. Dec 15 09:02:30.526883 kernel: audit: type=1106 audit(1765789350.513:811): pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.526934 kernel: audit: type=1104 audit(1765789350.514:812): pid=5116 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:30.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.128:22-10.0.0.1:35568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:30.722366 kubelet[2779]: E1215 09:02:30.722232 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:02:31.720419 kubelet[2779]: E1215 09:02:31.720319 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd" Dec 15 09:02:31.720955 kubelet[2779]: E1215 09:02:31.720867 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:02:32.006943 kubelet[2779]: E1215 09:02:32.006886 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:33.720943 kubelet[2779]: E1215 09:02:33.720891 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77fb68b4b8-6g7q8" podUID="96d2f283-85f5-46e8-a0e8-3d26c4d28535" Dec 15 09:02:34.720278 kubelet[2779]: E1215 09:02:34.720246 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:34.721075 kubelet[2779]: E1215 09:02:34.720875 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9jfj" podUID="1c5ab256-136a-4105-b959-3f278aa6f144" Dec 15 09:02:34.721412 kubelet[2779]: E1215 09:02:34.721193 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" podUID="5b0886e8-4f46-439e-a9a8-bea45b864b25" Dec 15 09:02:35.525558 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:35572.service - OpenSSH per-connection server daemon (10.0.0.1:35572). Dec 15 09:02:35.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.128:22-10.0.0.1:35572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:35.527429 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 15 09:02:35.527493 kernel: audit: type=1130 audit(1765789355.524:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.128:22-10.0.0.1:35572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:35.602000 audit[5162]: USER_ACCT pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.608836 kernel: audit: type=1101 audit(1765789355.602:815): pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.606035 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:35.609164 sshd[5162]: Accepted publickey for core from 10.0.0.1 port 35572 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:35.603000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.612494 systemd-logind[1586]: New session 18 of user core. 
Dec 15 09:02:35.616939 kernel: audit: type=1103 audit(1765789355.603:816): pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.617007 kernel: audit: type=1006 audit(1765789355.603:817): pid=5162 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 15 09:02:35.617037 kernel: audit: type=1300 audit(1765789355.603:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a215b90 a2=3 a3=0 items=0 ppid=1 pid=5162 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:35.603000 audit[5162]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a215b90 a2=3 a3=0 items=0 ppid=1 pid=5162 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:35.603000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:35.624432 kernel: audit: type=1327 audit(1765789355.603:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:35.626113 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 15 09:02:35.630000 audit[5162]: USER_START pid=5162 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.640865 kernel: audit: type=1105 audit(1765789355.630:818): pid=5162 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.640000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.645833 kernel: audit: type=1103 audit(1765789355.640:819): pid=5166 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.727058 sshd[5166]: Connection closed by 10.0.0.1 port 35572 Dec 15 09:02:35.727361 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:35.727000 audit[5162]: USER_END pid=5162 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.731756 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:35572.service: Deactivated successfully. 
Dec 15 09:02:35.734833 kernel: audit: type=1106 audit(1765789355.727:820): pid=5162 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.727000 audit[5162]: CRED_DISP pid=5162 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:35.735054 systemd[1]: session-18.scope: Deactivated successfully. Dec 15 09:02:35.736114 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Dec 15 09:02:35.738012 systemd-logind[1586]: Removed session 18. Dec 15 09:02:35.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.128:22-10.0.0.1:35572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:35.739900 kernel: audit: type=1104 audit(1765789355.727:821): pid=5162 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.744286 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:41066.service - OpenSSH per-connection server daemon (10.0.0.1:41066). Dec 15 09:02:40.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.128:22-10.0.0.1:41066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:40.746622 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 15 09:02:40.746685 kernel: audit: type=1130 audit(1765789360.743:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.128:22-10.0.0.1:41066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:40.817000 audit[5179]: USER_ACCT pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.822787 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:40.826027 kernel: audit: type=1101 audit(1765789360.817:824): pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.826066 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 41066 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:40.819000 audit[5179]: CRED_ACQ pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.830976 systemd-logind[1586]: New session 19 of user core. 
Dec 15 09:02:40.834645 kernel: audit: type=1103 audit(1765789360.819:825): pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.834748 kernel: audit: type=1006 audit(1765789360.819:826): pid=5179 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 15 09:02:40.834778 kernel: audit: type=1300 audit(1765789360.819:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc162fc280 a2=3 a3=0 items=0 ppid=1 pid=5179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:40.819000 audit[5179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc162fc280 a2=3 a3=0 items=0 ppid=1 pid=5179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:40.819000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:40.842457 kernel: audit: type=1327 audit(1765789360.819:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:40.844963 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 15 09:02:40.845000 audit[5179]: USER_START pid=5179 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.858297 kernel: audit: type=1105 audit(1765789360.845:827): pid=5179 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.858376 kernel: audit: type=1103 audit(1765789360.847:828): pid=5183 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.847000 audit[5183]: CRED_ACQ pid=5183 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.943304 sshd[5183]: Connection closed by 10.0.0.1 port 41066 Dec 15 09:02:40.943583 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:40.943000 audit[5179]: USER_END pid=5179 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.943000 audit[5179]: CRED_DISP pid=5179 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.955844 kernel: audit: type=1106 audit(1765789360.943:829): pid=5179 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.955894 kernel: audit: type=1104 audit(1765789360.943:830): pid=5179 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:40.968753 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:41066.service: Deactivated successfully. Dec 15 09:02:40.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.128:22-10.0.0.1:41066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:40.971289 systemd[1]: session-19.scope: Deactivated successfully. Dec 15 09:02:40.972390 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Dec 15 09:02:40.976422 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:41072.service - OpenSSH per-connection server daemon (10.0.0.1:41072). Dec 15 09:02:40.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.128:22-10.0.0.1:41072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:40.977776 systemd-logind[1586]: Removed session 19. 
Dec 15 09:02:41.055000 audit[5196]: USER_ACCT pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.058989 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 41072 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:41.058000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.058000 audit[5196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7a8f6a40 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:41.058000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:41.061249 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:41.072741 systemd-logind[1586]: New session 20 of user core. Dec 15 09:02:41.077087 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 15 09:02:41.078000 audit[5196]: USER_START pid=5196 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.081000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.446886 sshd[5200]: Connection closed by 10.0.0.1 port 41072 Dec 15 09:02:41.447144 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:41.447000 audit[5196]: USER_END pid=5196 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.447000 audit[5196]: CRED_DISP pid=5196 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.459539 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:41072.service: Deactivated successfully. Dec 15 09:02:41.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.128:22-10.0.0.1:41072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:41.461680 systemd[1]: session-20.scope: Deactivated successfully. Dec 15 09:02:41.462517 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. 
Dec 15 09:02:41.465398 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:41078.service - OpenSSH per-connection server daemon (10.0.0.1:41078). Dec 15 09:02:41.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.128:22-10.0.0.1:41078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:41.466096 systemd-logind[1586]: Removed session 20. Dec 15 09:02:41.529000 audit[5212]: USER_ACCT pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.530033 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 41078 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:41.532000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.532000 audit[5212]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbd6e7650 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:41.532000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:41.534600 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:41.541543 systemd-logind[1586]: New session 21 of user core. Dec 15 09:02:41.554964 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 15 09:02:41.556000 audit[5212]: USER_START pid=5212 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.558000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:41.719182 kubelet[2779]: E1215 09:02:41.719054 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:41.721551 containerd[1611]: time="2025-12-15T09:02:41.721241011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 15 09:02:42.080496 containerd[1611]: time="2025-12-15T09:02:42.080314561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:42.088491 containerd[1611]: time="2025-12-15T09:02:42.088445972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 15 09:02:42.089851 kubelet[2779]: E1215 09:02:42.088954 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:42.089851 kubelet[2779]: E1215 09:02:42.089002 2779 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:42.089851 kubelet[2779]: E1215 09:02:42.089142 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krf6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5756d6c8cc-7vpbv_calico-apiserver(4740f6ee-54b4-4ea9-b846-2cfc949dbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:42.090083 containerd[1611]: time="2025-12-15T09:02:42.088645273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:42.090726 kubelet[2779]: E1215 09:02:42.090670 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:02:42.165000 audit[5229]: NETFILTER_CFG table=filter:132 family=2 entries=26 op=nft_register_rule pid=5229 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:42.165000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffda0e48730 a2=0 a3=7ffda0e4871c items=0 ppid=2893 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:42.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:42.169000 audit[5229]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:42.169000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda0e48730 a2=0 a3=0 items=0 ppid=2893 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:42.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:42.174599 sshd[5216]: Connection closed by 10.0.0.1 port 41078 Dec 15 09:02:42.174997 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:42.176000 audit[5212]: USER_END pid=5212 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.176000 audit[5212]: CRED_DISP pid=5212 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.187740 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:41078.service: Deactivated successfully. Dec 15 09:02:42.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.128:22-10.0.0.1:41078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:42.190552 systemd[1]: session-21.scope: Deactivated successfully. Dec 15 09:02:42.192981 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. Dec 15 09:02:42.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.128:22-10.0.0.1:41094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:42.199061 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:41094.service - OpenSSH per-connection server daemon (10.0.0.1:41094). Dec 15 09:02:42.200507 systemd-logind[1586]: Removed session 21. 
Dec 15 09:02:42.202000 audit[5233]: NETFILTER_CFG table=filter:134 family=2 entries=38 op=nft_register_rule pid=5233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:42.202000 audit[5233]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcadca9690 a2=0 a3=7ffcadca967c items=0 ppid=2893 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:42.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:42.207000 audit[5233]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:42.207000 audit[5233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcadca9690 a2=0 a3=0 items=0 ppid=2893 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:42.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:42.284000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.285548 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 41094 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:42.285000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.285000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd15e60280 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:42.285000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:42.287617 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:42.294359 systemd-logind[1586]: New session 22 of user core. Dec 15 09:02:42.298425 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 15 09:02:42.301000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.303000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.487090 sshd[5240]: Connection closed by 10.0.0.1 port 41094 Dec 15 09:02:42.488511 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:42.493000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 
09:02:42.493000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.128:22-10.0.0.1:41100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:42.500385 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:41100.service - OpenSSH per-connection server daemon (10.0.0.1:41100). Dec 15 09:02:42.501199 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:41094.service: Deactivated successfully. Dec 15 09:02:42.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.128:22-10.0.0.1:41094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:42.507644 systemd[1]: session-22.scope: Deactivated successfully. Dec 15 09:02:42.510740 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit. Dec 15 09:02:42.513189 systemd-logind[1586]: Removed session 22. 
Dec 15 09:02:42.564000 audit[5248]: USER_ACCT pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.565346 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 41100 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:42.566000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.566000 audit[5248]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8b022200 a2=3 a3=0 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:42.566000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:42.569012 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:42.577157 systemd-logind[1586]: New session 23 of user core. Dec 15 09:02:42.590132 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 15 09:02:42.593000 audit[5248]: USER_START pid=5248 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.595000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.679846 sshd[5255]: Connection closed by 10.0.0.1 port 41100 Dec 15 09:02:42.678571 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:42.679000 audit[5248]: USER_END pid=5248 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.679000 audit[5248]: CRED_DISP pid=5248 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:42.685825 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:41100.service: Deactivated successfully. Dec 15 09:02:42.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.128:22-10.0.0.1:41100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:42.687998 systemd[1]: session-23.scope: Deactivated successfully. Dec 15 09:02:42.689004 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. 
Dec 15 09:02:42.690149 systemd-logind[1586]: Removed session 23. Dec 15 09:02:43.720410 containerd[1611]: time="2025-12-15T09:02:43.720351859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 15 09:02:44.084162 containerd[1611]: time="2025-12-15T09:02:44.084081156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:44.161282 containerd[1611]: time="2025-12-15T09:02:44.161172742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 15 09:02:44.161457 containerd[1611]: time="2025-12-15T09:02:44.161301649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:44.162093 kubelet[2779]: E1215 09:02:44.161626 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 15 09:02:44.162093 kubelet[2779]: E1215 09:02:44.161701 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 15 09:02:44.162093 kubelet[2779]: E1215 09:02:44.161881 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfccg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 15 09:02:44.164224 containerd[1611]: time="2025-12-15T09:02:44.164150179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 15 09:02:44.535222 containerd[1611]: time="2025-12-15T09:02:44.535154753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:44.536283 containerd[1611]: time="2025-12-15T09:02:44.536232935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 15 09:02:44.536283 containerd[1611]: time="2025-12-15T09:02:44.536311566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:44.536510 kubelet[2779]: E1215 09:02:44.536468 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 15 09:02:44.536585 kubelet[2779]: E1215 09:02:44.536518 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 15 09:02:44.536719 kubelet[2779]: E1215 09:02:44.536650 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfccg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qg84l_calico-system(47f37f66-a36c-44b9-8447-de6c1cff5809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:44.537894 kubelet[2779]: E1215 09:02:44.537845 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:02:45.722577 containerd[1611]: time="2025-12-15T09:02:45.722449252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 15 09:02:46.081753 containerd[1611]: time="2025-12-15T09:02:46.081697952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:46.136978 containerd[1611]: time="2025-12-15T09:02:46.136906252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 15 09:02:46.137181 containerd[1611]: time="2025-12-15T09:02:46.136957971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:46.137254 kubelet[2779]: E1215 09:02:46.137173 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 15 09:02:46.137829 kubelet[2779]: E1215 09:02:46.137253 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 15 09:02:46.137829 kubelet[2779]: E1215 09:02:46.137410 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a068dc8facc4dbe9f1ecf18e1d1e8f5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-77fb68b4b8-6g7q8_calico-system(96d2f283-85f5-46e8-a0e8-3d26c4d28535): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:46.139690 containerd[1611]: time="2025-12-15T09:02:46.139651179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 15 09:02:46.576428 containerd[1611]: time="2025-12-15T09:02:46.576383135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:46.577725 containerd[1611]: time="2025-12-15T09:02:46.577665948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 15 09:02:46.577964 containerd[1611]: time="2025-12-15T09:02:46.577751611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:46.578064 kubelet[2779]: E1215 09:02:46.577931 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 15 09:02:46.578064 kubelet[2779]: E1215 09:02:46.577983 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 15 09:02:46.578197 kubelet[2779]: E1215 09:02:46.578141 2779 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77fb68b4b8-6g7q8_calico-system(96d2f283-85f5-46e8-a0e8-3d26c4d28535): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:46.579523 kubelet[2779]: E1215 09:02:46.579466 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77fb68b4b8-6g7q8" podUID="96d2f283-85f5-46e8-a0e8-3d26c4d28535" Dec 15 09:02:46.721183 containerd[1611]: time="2025-12-15T09:02:46.720893711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 15 09:02:46.821000 audit[5277]: NETFILTER_CFG table=filter:136 family=2 entries=26 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:46.825844 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 15 09:02:46.826007 kernel: audit: type=1325 audit(1765789366.821:872): table=filter:136 family=2 entries=26 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:46.821000 audit[5277]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce97e54c0 a2=0 a3=7ffce97e54ac items=0 ppid=2893 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:46.833197 kernel: audit: type=1300 audit(1765789366.821:872): 
arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce97e54c0 a2=0 a3=7ffce97e54ac items=0 ppid=2893 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:46.833250 kernel: audit: type=1327 audit(1765789366.821:872): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:46.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:46.840000 audit[5277]: NETFILTER_CFG table=nat:137 family=2 entries=104 op=nft_register_chain pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:46.840000 audit[5277]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffce97e54c0 a2=0 a3=7ffce97e54ac items=0 ppid=2893 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:46.852601 kernel: audit: type=1325 audit(1765789366.840:873): table=nat:137 family=2 entries=104 op=nft_register_chain pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 15 09:02:46.852730 kernel: audit: type=1300 audit(1765789366.840:873): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffce97e54c0 a2=0 a3=7ffce97e54ac items=0 ppid=2893 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:46.852767 kernel: audit: type=1327 audit(1765789366.840:873): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 
09:02:46.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 15 09:02:47.056077 containerd[1611]: time="2025-12-15T09:02:47.056026331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:47.057098 containerd[1611]: time="2025-12-15T09:02:47.057063032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 15 09:02:47.057163 containerd[1611]: time="2025-12-15T09:02:47.057120922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:47.057320 kubelet[2779]: E1215 09:02:47.057277 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:47.057363 kubelet[2779]: E1215 09:02:47.057336 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 15 09:02:47.057511 kubelet[2779]: E1215 09:02:47.057475 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggr54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5756d6c8cc-j8csg_calico-apiserver(8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:47.058705 kubelet[2779]: E1215 09:02:47.058653 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd" Dec 15 09:02:47.697040 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:41114.service - OpenSSH per-connection server daemon (10.0.0.1:41114). Dec 15 09:02:47.701893 kernel: audit: type=1130 audit(1765789367.696:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.128:22-10.0.0.1:41114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:47.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.128:22-10.0.0.1:41114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:47.768000 audit[5279]: USER_ACCT pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.774484 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:47.775277 kernel: audit: type=1101 audit(1765789367.768:875): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.774000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.776884 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:47.783210 kernel: audit: type=1103 audit(1765789367.774:876): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.783259 kernel: audit: type=1006 audit(1765789367.774:877): pid=5279 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 15 09:02:47.784322 systemd-logind[1586]: New session 24 of user core. 
Dec 15 09:02:47.774000 audit[5279]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5ce31890 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:47.774000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:47.794046 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 15 09:02:47.796000 audit[5279]: USER_START pid=5279 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.799000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.873299 sshd[5283]: Connection closed by 10.0.0.1 port 41114 Dec 15 09:02:47.873593 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:47.874000 audit[5279]: USER_END pid=5279 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.874000 audit[5279]: CRED_DISP pid=5279 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:47.879483 systemd[1]: 
sshd@22-10.0.0.128:22-10.0.0.1:41114.service: Deactivated successfully. Dec 15 09:02:47.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.128:22-10.0.0.1:41114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:47.881902 systemd[1]: session-24.scope: Deactivated successfully. Dec 15 09:02:47.882864 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit. Dec 15 09:02:47.884336 systemd-logind[1586]: Removed session 24. Dec 15 09:02:48.724277 containerd[1611]: time="2025-12-15T09:02:48.723999763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 15 09:02:49.057070 containerd[1611]: time="2025-12-15T09:02:49.057016255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 09:02:49.058592 containerd[1611]: time="2025-12-15T09:02:49.058404936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 15 09:02:49.058592 containerd[1611]: time="2025-12-15T09:02:49.058562196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:49.059108 kubelet[2779]: E1215 09:02:49.059036 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 15 09:02:49.059997 kubelet[2779]: E1215 09:02:49.059536 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 15 09:02:49.059997 kubelet[2779]: E1215 09:02:49.059771 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6mdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f9f87d6-hpvt5_calico-system(5b0886e8-4f46-439e-a9a8-bea45b864b25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:49.061968 kubelet[2779]: E1215 09:02:49.061917 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f9f87d6-hpvt5" podUID="5b0886e8-4f46-439e-a9a8-bea45b864b25" Dec 15 09:02:49.719714 containerd[1611]: time="2025-12-15T09:02:49.719675334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 15 09:02:50.043516 containerd[1611]: time="2025-12-15T09:02:50.043450243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 15 
09:02:50.044620 containerd[1611]: time="2025-12-15T09:02:50.044584879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 15 09:02:50.044710 containerd[1611]: time="2025-12-15T09:02:50.044654162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 15 09:02:50.044879 kubelet[2779]: E1215 09:02:50.044831 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 15 09:02:50.044879 kubelet[2779]: E1215 09:02:50.044886 2779 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 15 09:02:50.045142 kubelet[2779]: E1215 09:02:50.045025 2779 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl4g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9jfj_calico-system(1c5ab256-136a-4105-b959-3f278aa6f144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 15 09:02:50.046196 kubelet[2779]: E1215 09:02:50.046144 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9jfj" podUID="1c5ab256-136a-4105-b959-3f278aa6f144" Dec 15 09:02:50.721703 kubelet[2779]: E1215 09:02:50.721662 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:52.720443 kubelet[2779]: E1215 09:02:52.720368 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-7vpbv" podUID="4740f6ee-54b4-4ea9-b846-2cfc949dbd68" Dec 15 09:02:52.885589 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:49404.service - OpenSSH per-connection server daemon (10.0.0.1:49404). Dec 15 09:02:52.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.128:22-10.0.0.1:49404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:52.886939 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 15 09:02:52.886999 kernel: audit: type=1130 audit(1765789372.884:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.128:22-10.0.0.1:49404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:52.956000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.957392 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 49404 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:52.959772 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:52.956000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.964218 systemd-logind[1586]: New session 25 of user core. Dec 15 09:02:52.966761 kernel: audit: type=1101 audit(1765789372.956:884): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.966828 kernel: audit: type=1103 audit(1765789372.956:885): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.966850 kernel: audit: type=1006 audit(1765789372.956:886): pid=5297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 15 09:02:52.956000 audit[5297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd813f5680 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:52.974761 kernel: audit: type=1300 audit(1765789372.956:886): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd813f5680 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:52.974975 kernel: audit: type=1327 audit(1765789372.956:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:52.956000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:52.979045 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 15 09:02:52.980000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.980000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.991635 kernel: audit: type=1105 audit(1765789372.980:887): pid=5297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:52.991692 kernel: audit: type=1103 audit(1765789372.980:888): pid=5301 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:53.067086 sshd[5301]: Connection closed by 10.0.0.1 port 49404 Dec 15 09:02:53.068023 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:53.068000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:53.074369 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:49404.service: Deactivated successfully. Dec 15 09:02:53.068000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:53.076696 systemd[1]: session-25.scope: Deactivated successfully. Dec 15 09:02:53.079073 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit. Dec 15 09:02:53.080592 kernel: audit: type=1106 audit(1765789373.068:889): pid=5297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:53.080698 kernel: audit: type=1104 audit(1765789373.068:890): pid=5297 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:53.080753 systemd-logind[1586]: Removed session 25. 
Dec 15 09:02:53.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.128:22-10.0.0.1:49404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:53.719781 kubelet[2779]: E1215 09:02:53.719712 2779 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 15 09:02:57.721251 kubelet[2779]: E1215 09:02:57.721197 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77fb68b4b8-6g7q8" podUID="96d2f283-85f5-46e8-a0e8-3d26c4d28535" Dec 15 09:02:57.721251 kubelet[2779]: E1215 09:02:57.721226 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qg84l" podUID="47f37f66-a36c-44b9-8447-de6c1cff5809" Dec 15 09:02:58.086037 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:49420.service - OpenSSH per-connection server daemon (10.0.0.1:49420). Dec 15 09:02:58.091822 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 15 09:02:58.091868 kernel: audit: type=1130 audit(1765789378.085:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.128:22-10.0.0.1:49420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:58.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.128:22-10.0.0.1:49420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 15 09:02:58.167000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.169612 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 49420 ssh2: RSA SHA256:GUjAWapGj9d/2j5UfkLAdqMZzpWgH4czQmiEj3nqh/k Dec 15 09:02:58.171083 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 15 09:02:58.168000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.176872 systemd-logind[1586]: New session 26 of user core. Dec 15 09:02:58.177841 kernel: audit: type=1101 audit(1765789378.167:893): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.177887 kernel: audit: type=1103 audit(1765789378.168:894): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.177906 kernel: audit: type=1006 audit(1765789378.168:895): pid=5319 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 15 09:02:58.185262 kernel: audit: type=1300 audit(1765789378.168:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe91e4f0e0 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:58.168000 audit[5319]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe91e4f0e0 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 15 09:02:58.168000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:58.187377 kernel: audit: type=1327 audit(1765789378.168:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 15 09:02:58.189153 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 15 09:02:58.192000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.192000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.203829 kernel: audit: type=1105 audit(1765789378.192:896): pid=5319 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.203867 kernel: audit: type=1103 audit(1765789378.192:897): pid=5323 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.295830 sshd[5323]: Connection closed by 10.0.0.1 port 49420 Dec 15 09:02:58.298002 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Dec 15 09:02:58.304000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.310696 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit. Dec 15 09:02:58.311877 kernel: audit: type=1106 audit(1765789378.304:898): pid=5319 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.311693 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:49420.service: Deactivated successfully. Dec 15 09:02:58.313780 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 15 09:02:58.305000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.320879 kernel: audit: type=1104 audit(1765789378.305:899): pid=5319 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 15 09:02:58.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.128:22-10.0.0.1:49420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 15 09:02:58.323235 systemd-logind[1586]: Removed session 26. Dec 15 09:02:59.721758 kubelet[2779]: E1215 09:02:59.721352 2779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756d6c8cc-j8csg" podUID="8526aa49-e2fa-4e0c-9a1a-c5b8a64482bd"