Dec 13 00:21:54.577939 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 20:55:10 -00 2025 Dec 13 00:21:54.577991 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:21:54.578008 kernel: BIOS-provided physical RAM map: Dec 13 00:21:54.578016 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 00:21:54.578023 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 00:21:54.578030 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 00:21:54.578039 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 00:21:54.578046 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 00:21:54.578056 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 00:21:54.578063 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 00:21:54.578075 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:21:54.578082 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 00:21:54.578090 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:21:54.578097 kernel: NX (Execute Disable) protection: active Dec 13 00:21:54.578106 kernel: APIC: Static calls initialized Dec 13 00:21:54.578119 kernel: SMBIOS 2.8 present. 
Dec 13 00:21:54.578132 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 00:21:54.578142 kernel: DMI: Memory slots populated: 1/1 Dec 13 00:21:54.578155 kernel: Hypervisor detected: KVM Dec 13 00:21:54.578165 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 00:21:54.578175 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 00:21:54.578185 kernel: kvm-clock: using sched offset of 4552113266 cycles Dec 13 00:21:54.578196 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 00:21:54.578207 kernel: tsc: Detected 2794.748 MHz processor Dec 13 00:21:54.578226 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 00:21:54.578257 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 00:21:54.578268 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 00:21:54.578280 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 00:21:54.578291 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 00:21:54.578310 kernel: Using GB pages for direct mapping Dec 13 00:21:54.578322 kernel: ACPI: Early table checksum verification disabled Dec 13 00:21:54.578341 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 00:21:54.578352 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578364 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578375 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578387 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 00:21:54.578397 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578409 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578427 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578439 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:21:54.578459 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Dec 13 00:21:54.578470 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Dec 13 00:21:54.578482 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 00:21:54.578493 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Dec 13 00:21:54.578512 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Dec 13 00:21:54.578524 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Dec 13 00:21:54.578535 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Dec 13 00:21:54.578547 kernel: No NUMA configuration found Dec 13 00:21:54.578559 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 00:21:54.578571 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Dec 13 00:21:54.578589 kernel: Zone ranges: Dec 13 00:21:54.578600 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 00:21:54.578612 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 00:21:54.578623 kernel: Normal empty Dec 13 00:21:54.578634 kernel: Device empty Dec 13 00:21:54.578645 kernel: Movable zone start for each node Dec 13 00:21:54.578655 kernel: Early memory node ranges Dec 13 00:21:54.578673 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Dec 13 00:21:54.578683 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 00:21:54.578694 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 00:21:54.578706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 00:21:54.578720 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 00:21:54.578731 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 00:21:54.578745 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 00:21:54.578756 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 00:21:54.578773 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 00:21:54.578784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 00:21:54.578798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 00:21:54.578810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 00:21:54.578822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 00:21:54.578833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 00:21:54.578844 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 00:21:54.578868 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 00:21:54.578879 kernel: TSC deadline timer available Dec 13 00:21:54.578890 kernel: CPU topo: Max. logical packages: 1 Dec 13 00:21:54.578899 kernel: CPU topo: Max. logical dies: 1 Dec 13 00:21:54.578910 kernel: CPU topo: Max. dies per package: 1 Dec 13 00:21:54.578919 kernel: CPU topo: Max. threads per core: 1 Dec 13 00:21:54.578929 kernel: CPU topo: Num. cores per package: 4 Dec 13 00:21:54.578940 kernel: CPU topo: Num. threads per package: 4 Dec 13 00:21:54.578991 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 13 00:21:54.579004 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 00:21:54.579016 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 00:21:54.579026 kernel: kvm-guest: setup PV sched yield Dec 13 00:21:54.579037 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 00:21:54.579047 kernel: Booting paravirtualized kernel on KVM Dec 13 00:21:54.579058 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 00:21:54.579078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 00:21:54.579089 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 13 00:21:54.579099 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 13 00:21:54.579110 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 00:21:54.579120 kernel: kvm-guest: PV spinlocks enabled Dec 13 00:21:54.579130 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 00:21:54.579141 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:21:54.579157 kernel: random: crng init done Dec 13 00:21:54.579168 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 00:21:54.579178 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 
00:21:54.579194 kernel: Fallback order for Node 0: 0 Dec 13 00:21:54.579208 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Dec 13 00:21:54.579222 kernel: Policy zone: DMA32 Dec 13 00:21:54.579260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 00:21:54.579292 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 00:21:54.579320 kernel: ftrace: allocating 40103 entries in 157 pages Dec 13 00:21:54.579348 kernel: ftrace: allocated 157 pages with 5 groups Dec 13 00:21:54.579371 kernel: Dynamic Preempt: voluntary Dec 13 00:21:54.579394 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 00:21:54.579423 kernel: rcu: RCU event tracing is enabled. Dec 13 00:21:54.579445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 00:21:54.579485 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 00:21:54.579509 kernel: Rude variant of Tasks RCU enabled. Dec 13 00:21:54.579537 kernel: Tracing variant of Tasks RCU enabled. Dec 13 00:21:54.579560 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 00:21:54.579585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 00:21:54.579613 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:21:54.579640 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:21:54.579668 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:21:54.579704 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 00:21:54.579728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 00:21:54.579761 kernel: Console: colour VGA+ 80x25 Dec 13 00:21:54.579792 kernel: printk: legacy console [ttyS0] enabled Dec 13 00:21:54.579819 kernel: ACPI: Core revision 20240827 Dec 13 00:21:54.579850 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 00:21:54.579874 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 00:21:54.579886 kernel: x2apic enabled Dec 13 00:21:54.579899 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 00:21:54.579920 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 00:21:54.579933 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 00:21:54.579949 kernel: kvm-guest: setup PV IPIs Dec 13 00:21:54.579977 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 00:21:54.580013 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:21:54.580042 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 00:21:54.580071 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 00:21:54.580099 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 00:21:54.580128 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 00:21:54.580161 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 00:21:54.580185 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 00:21:54.580223 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 13 00:21:54.580277 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 00:21:54.580299 kernel: active return thunk: retbleed_return_thunk Dec 13 00:21:54.580322 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 00:21:54.580335 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 00:21:54.580347 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 00:21:54.580360 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 00:21:54.580384 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 00:21:54.580396 kernel: active return thunk: srso_return_thunk Dec 13 00:21:54.580408 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 00:21:54.580421 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 00:21:54.580432 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 00:21:54.580441 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 00:21:54.580457 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 00:21:54.580469 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 00:21:54.580481 kernel: Freeing SMP alternatives memory: 32K Dec 13 00:21:54.580493 kernel: pid_max: default: 32768 minimum: 301 Dec 13 00:21:54.580506 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 13 00:21:54.580518 kernel: landlock: Up and running. Dec 13 00:21:54.580529 kernel: SELinux: Initializing. Dec 13 00:21:54.580551 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:21:54.580564 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:21:54.580576 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 00:21:54.580588 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 00:21:54.580602 kernel: ... version: 0 Dec 13 00:21:54.580619 kernel: ... bit width: 48 Dec 13 00:21:54.580632 kernel: ... generic registers: 6 Dec 13 00:21:54.580641 kernel: ... value mask: 0000ffffffffffff Dec 13 00:21:54.580664 kernel: ... max period: 00007fffffffffff Dec 13 00:21:54.580676 kernel: ... fixed-purpose events: 0 Dec 13 00:21:54.580688 kernel: ... event mask: 000000000000003f Dec 13 00:21:54.580701 kernel: signal: max sigframe size: 1776 Dec 13 00:21:54.580713 kernel: rcu: Hierarchical SRCU implementation. Dec 13 00:21:54.580726 kernel: rcu: Max phase no-delay instances is 400. Dec 13 00:21:54.580739 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 13 00:21:54.580759 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 00:21:54.580771 kernel: smpboot: x86: Booting SMP configuration: Dec 13 00:21:54.580784 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 00:21:54.580796 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 00:21:54.580809 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 00:21:54.580822 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15596K init, 2444K bss, 120524K reserved, 0K cma-reserved) Dec 13 00:21:54.580835 kernel: devtmpfs: initialized Dec 13 00:21:54.580853 kernel: x86/mm: Memory block size: 128MB Dec 13 00:21:54.580865 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 00:21:54.580877 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 00:21:54.580886 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 00:21:54.580895 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 00:21:54.580903 kernel: audit: initializing netlink subsys (disabled) Dec 13 00:21:54.580913 kernel: audit: type=2000 audit(1765585309.948:1): state=initialized audit_enabled=0 res=1 Dec 13 00:21:54.580927 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 00:21:54.580937 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 00:21:54.580947 kernel: cpuidle: using governor menu Dec 13 00:21:54.580957 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 00:21:54.580967 kernel: dca service started, version 1.12.1 Dec 13 00:21:54.580976 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 13 00:21:54.580985 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 00:21:54.580998 kernel: PCI: Using configuration type 1 for base access Dec 13 00:21:54.581007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 00:21:54.581016 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 00:21:54.581025 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 00:21:54.581034 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 00:21:54.581042 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 00:21:54.581051 kernel: ACPI: Added _OSI(Module Device) Dec 13 00:21:54.581064 kernel: ACPI: Added _OSI(Processor Device) Dec 13 00:21:54.581073 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 00:21:54.581082 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 00:21:54.581094 kernel: ACPI: Interpreter enabled Dec 13 00:21:54.581105 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 00:21:54.581117 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 00:21:54.581130 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 00:21:54.581142 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 00:21:54.581161 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 00:21:54.581173 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 00:21:54.581562 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 00:21:54.581799 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 00:21:54.582025 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 00:21:54.582053 kernel: PCI host bridge to bus 0000:00 Dec 13 00:21:54.582300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 00:21:54.582515 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 00:21:54.582704 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 00:21:54.582914 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 00:21:54.583124 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 00:21:54.583374 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 00:21:54.583611 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 00:21:54.583853 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 13 00:21:54.584087 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 13 00:21:54.584340 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Dec 13 00:21:54.584569 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Dec 13 00:21:54.584789 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Dec 13 00:21:54.585010 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 00:21:54.585279 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 13 00:21:54.585527 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Dec 13 00:21:54.585757 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Dec 13 00:21:54.585998 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 00:21:54.586257 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 13 00:21:54.586501 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Dec 13 00:21:54.586727 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Dec 13 00:21:54.586954 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Dec 13 00:21:54.587196 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 13 00:21:54.587451 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Dec 13 00:21:54.587640 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Dec 13 00:21:54.587841 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 00:21:54.588065 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Dec 13 00:21:54.588464 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 13 00:21:54.588893 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 00:21:54.589192 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 13 00:21:54.589458 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Dec 13 00:21:54.589689 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Dec 13 00:21:54.589970 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 13 00:21:54.590202 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 13 00:21:54.590410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 00:21:54.590435 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 00:21:54.590446 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 00:21:54.590457 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 00:21:54.590468 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 00:21:54.590478 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 00:21:54.590489 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 00:21:54.590511 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 00:21:54.590524 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 00:21:54.590536 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 00:21:54.590549 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 00:21:54.590562 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 00:21:54.590574 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 00:21:54.590587 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 00:21:54.590607 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 00:21:54.590619 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 00:21:54.590632 kernel: iommu: Default domain type: Translated Dec 13 00:21:54.590645 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 00:21:54.590657 kernel: PCI: Using ACPI for IRQ routing Dec 13 00:21:54.590670 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 00:21:54.590682 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 00:21:54.590702 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 00:21:54.590948 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 00:21:54.591174 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 00:21:54.591446 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 00:21:54.591466 kernel: vgaarb: loaded Dec 13 00:21:54.591479 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 00:21:54.591492 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 00:21:54.591515 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 00:21:54.591528 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 
00:21:54.591541 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 00:21:54.591553 kernel: pnp: PnP ACPI init Dec 13 00:21:54.591803 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 00:21:54.591823 kernel: pnp: PnP ACPI: found 6 devices Dec 13 00:21:54.591845 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 00:21:54.591865 kernel: NET: Registered PF_INET protocol family Dec 13 00:21:54.591889 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 00:21:54.591902 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 00:21:54.591916 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 00:21:54.591928 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 00:21:54.591940 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 00:21:54.591961 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 00:21:54.591974 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:21:54.591987 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:21:54.592000 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 00:21:54.592011 kernel: NET: Registered PF_XDP protocol family Dec 13 00:21:54.592259 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 00:21:54.592484 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 00:21:54.592699 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 00:21:54.592908 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 00:21:54.593149 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 00:21:54.596484 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 00:21:54.596514 kernel: PCI: CLS 0 bytes, default 64 Dec 13 00:21:54.596528 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:21:54.596541 kernel: Initialise system trusted keyrings Dec 13 00:21:54.596562 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 00:21:54.596575 kernel: Key type asymmetric registered Dec 13 00:21:54.596588 kernel: Asymmetric key parser 'x509' registered Dec 13 00:21:54.596602 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 00:21:54.596615 kernel: io scheduler mq-deadline registered Dec 13 00:21:54.596628 kernel: io scheduler kyber registered Dec 13 00:21:54.596640 kernel: io scheduler bfq registered Dec 13 00:21:54.596656 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 00:21:54.596669 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 00:21:54.596681 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 00:21:54.596694 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 00:21:54.596706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 00:21:54.596718 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 00:21:54.596731 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 00:21:54.596746 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 00:21:54.596758 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 00:21:54.597003 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 
00:21:54.597022 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 00:21:54.597234 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 00:21:54.597516 kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T00:21:52 UTC (1765585312) Dec 13 00:21:54.597741 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 00:21:54.597759 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 00:21:54.597772 kernel: NET: Registered PF_INET6 protocol family Dec 13 00:21:54.597783 kernel: Segment Routing with IPv6 Dec 13 00:21:54.597800 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 00:21:54.597814 kernel: NET: Registered PF_PACKET protocol family Dec 13 00:21:54.597827 kernel: Key type dns_resolver registered Dec 13 00:21:54.597845 kernel: IPI shorthand broadcast: enabled Dec 13 00:21:54.597858 kernel: sched_clock: Marking stable (2205002837, 249339583)->(2539129735, -84787315) Dec 13 00:21:54.597870 kernel: registered taskstats version 1 Dec 13 00:21:54.597883 kernel: Loading compiled-in X.509 certificates Dec 13 00:21:54.597897 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 199a9f6885410acbf0a1b178e5562253352ca03c' Dec 13 00:21:54.597909 kernel: Demotion targets for Node 0: null Dec 13 00:21:54.597922 kernel: Key type .fscrypt registered Dec 13 00:21:54.597935 kernel: Key type fscrypt-provisioning registered Dec 13 00:21:54.597957 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 00:21:54.597970 kernel: ima: Allocated hash algorithm: sha1 Dec 13 00:21:54.597983 kernel: ima: No architecture policies found Dec 13 00:21:54.597995 kernel: clk: Disabling unused clocks Dec 13 00:21:54.598007 kernel: Freeing unused kernel image (initmem) memory: 15596K Dec 13 00:21:54.598019 kernel: Write protecting the kernel read-only data: 47104k Dec 13 00:21:54.598031 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 13 00:21:54.598050 kernel: Run /init as init process Dec 13 00:21:54.598061 kernel: with arguments: Dec 13 00:21:54.598073 kernel: /init Dec 13 00:21:54.598085 kernel: with environment: Dec 13 00:21:54.598097 kernel: HOME=/ Dec 13 00:21:54.598108 kernel: TERM=linux Dec 13 00:21:54.598120 kernel: SCSI subsystem initialized Dec 13 00:21:54.598139 kernel: libata version 3.00 loaded. 
Dec 13 00:21:54.598440 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 00:21:54.598515 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 00:21:54.598756 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 13 00:21:54.598960 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 13 00:21:54.599145 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 00:21:54.599425 kernel: scsi host0: ahci Dec 13 00:21:54.599643 kernel: scsi host1: ahci Dec 13 00:21:54.599886 kernel: scsi host2: ahci Dec 13 00:21:54.600137 kernel: scsi host3: ahci Dec 13 00:21:54.600419 kernel: scsi host4: ahci Dec 13 00:21:54.600681 kernel: scsi host5: ahci Dec 13 00:21:54.600702 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Dec 13 00:21:54.600717 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Dec 13 00:21:54.600730 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Dec 13 00:21:54.600743 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Dec 13 00:21:54.600757 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Dec 13 00:21:54.600770 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Dec 13 00:21:54.600792 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 00:21:54.600805 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 00:21:54.600818 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 00:21:54.600831 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 00:21:54.600845 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 00:21:54.600858 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 00:21:54.600871 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:21:54.600891 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 00:21:54.600905 kernel: ata3.00: applying bridge limits Dec 13 00:21:54.600918 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:21:54.600933 kernel: ata3.00: configured for UDMA/100 Dec 13 00:21:54.601275 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 00:21:54.601543 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 00:21:54.601784 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 13 00:21:54.601803 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 00:21:54.601817 kernel: GPT:16515071 != 27000831 Dec 13 00:21:54.601830 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 00:21:54.601843 kernel: GPT:16515071 != 27000831 Dec 13 00:21:54.601855 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 00:21:54.601868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 00:21:54.602136 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 00:21:54.602155 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 00:21:54.602510 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 00:21:54.602531 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 00:21:54.602545 kernel: device-mapper: uevent: version 1.0.3 Dec 13 00:21:54.602558 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 13 00:21:54.602583 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 13 00:21:54.602601 kernel: raid6: avx2x4 gen() 27092 MB/s Dec 13 00:21:54.602614 kernel: raid6: avx2x2 gen() 23704 MB/s Dec 13 00:21:54.602626 kernel: raid6: avx2x1 gen() 21199 MB/s Dec 13 00:21:54.602638 kernel: raid6: using algorithm avx2x4 gen() 27092 MB/s Dec 13 00:21:54.602658 kernel: raid6: .... xor() 6310 MB/s, rmw enabled Dec 13 00:21:54.602671 kernel: raid6: using avx2x2 recovery algorithm Dec 13 00:21:54.602685 kernel: xor: automatically using best checksumming function avx Dec 13 00:21:54.602698 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 00:21:54.602709 kernel: BTRFS: device fsid 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181) Dec 13 00:21:54.602720 kernel: BTRFS info (device dm-0): first mount of filesystem 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d Dec 13 00:21:54.602731 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:21:54.602747 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 00:21:54.602758 kernel: BTRFS info (device dm-0): enabling free space tree Dec 13 00:21:54.602769 kernel: loop: module loaded Dec 13 00:21:54.602780 kernel: loop0: detected capacity change from 0 to 100528 Dec 13 00:21:54.602791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 00:21:54.602808 systemd[1]: Successfully made /usr/ read-only. Dec 13 00:21:54.602823 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:21:54.602839 systemd[1]: Detected virtualization kvm. Dec 13 00:21:54.602852 systemd[1]: Detected architecture x86-64. Dec 13 00:21:54.602868 systemd[1]: Running in initrd. Dec 13 00:21:54.602885 systemd[1]: No hostname configured, using default hostname. Dec 13 00:21:54.602903 systemd[1]: Hostname set to . Dec 13 00:21:54.602934 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:21:54.602951 systemd[1]: Queued start job for default target initrd.target. Dec 13 00:21:54.602968 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:21:54.602986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:21:54.603002 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:21:54.603017 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 00:21:54.603031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:21:54.603053 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 00:21:54.603067 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 00:21:54.603080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 00:21:54.603094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:21:54.603109 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:21:54.603122 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:21:54.603142 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:21:54.603157 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:21:54.603170 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:21:54.603184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:21:54.603198 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:21:54.603212 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:21:54.603225 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 00:21:54.603265 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 13 00:21:54.603279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:21:54.603293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:21:54.603316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:21:54.603330 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:21:54.603344 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 00:21:54.603366 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 00:21:54.603380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:21:54.603394 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 00:21:54.603408 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 13 00:21:54.603422 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 00:21:54.603435 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:21:54.603449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:21:54.603472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:21:54.603486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 00:21:54.603500 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:21:54.603514 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 00:21:54.603536 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 00:21:54.603594 systemd-journald[315]: Collecting audit messages is enabled. Dec 13 00:21:54.603627 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:21:54.603649 kernel: audit: type=1130 audit(1765585314.569:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.603663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:21:54.603676 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Dec 13 00:21:54.603689 kernel: Bridge firewalling registered Dec 13 00:21:54.603704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:21:54.603718 kernel: audit: type=1130 audit(1765585314.597:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.603738 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:21:54.603753 systemd-journald[315]: Journal started Dec 13 00:21:54.603781 systemd-journald[315]: Runtime Journal (/run/log/journal/88ff58cce0924c20abe0bfed1244fb42) is 6M, max 48.2M, 42.1M free. Dec 13 00:21:54.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.591949 systemd-modules-load[318]: Inserted module 'br_netfilter' Dec 13 00:21:54.682807 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:21:54.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.687276 kernel: audit: type=1130 audit(1765585314.682:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.697525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:21:54.705873 kernel: audit: type=1130 audit(1765585314.697:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.706195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:21:54.716858 kernel: audit: type=1130 audit(1765585314.708:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.714170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 00:21:54.719403 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 00:21:54.736216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:21:54.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:21:54.742821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:21:54.748249 kernel: audit: type=1130 audit(1765585314.740:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.748282 kernel: audit: type=1334 audit(1765585314.741:8): prog-id=6 op=LOAD Dec 13 00:21:54.741000 audit: BPF prog-id=6 op=LOAD Dec 13 00:21:54.756147 systemd-tmpfiles[343]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 13 00:21:54.766219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:21:54.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.770043 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:21:54.781759 kernel: audit: type=1130 audit(1765585314.769:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.781797 kernel: audit: type=1130 audit(1765585314.773:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.781936 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 00:21:54.819087 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:21:54.840776 systemd-resolved[348]: Positive Trust Anchors: Dec 13 00:21:54.840794 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:21:54.840798 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:21:54.840830 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:21:54.875983 systemd-resolved[348]: Defaulting to hostname 'linux'. Dec 13 00:21:54.878705 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 13 00:21:54.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:54.879763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:21:55.025310 kernel: Loading iSCSI transport class v2.0-870. Dec 13 00:21:55.046281 kernel: iscsi: registered transport (tcp) Dec 13 00:21:55.120276 kernel: iscsi: registered transport (qla4xxx) Dec 13 00:21:55.120375 kernel: QLogic iSCSI HBA Driver Dec 13 00:21:55.152122 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:21:55.194793 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:21:55.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.204201 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:21:55.270092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 00:21:55.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.289405 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 00:21:55.291326 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 00:21:55.345873 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:21:55.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.369000 audit: BPF prog-id=7 op=LOAD Dec 13 00:21:55.369000 audit: BPF prog-id=8 op=LOAD Dec 13 00:21:55.370552 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:21:55.404533 systemd-udevd[599]: Using default interface naming scheme 'v257'. Dec 13 00:21:55.421651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:21:55.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.436620 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 00:21:55.469988 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:21:55.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.475526 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 00:21:55.475564 kernel: audit: type=1130 audit(1765585315.474:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.476469 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 13 00:21:55.487165 kernel: audit: type=1334 audit(1765585315.475:19): prog-id=9 op=LOAD Dec 13 00:21:55.475000 audit: BPF prog-id=9 op=LOAD Dec 13 00:21:55.497393 dracut-pre-trigger[678]: rd.md=0: removing MD RAID activation Dec 13 00:21:55.554783 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:21:55.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.560209 systemd-networkd[702]: lo: Link UP Dec 13 00:21:55.563984 kernel: audit: type=1130 audit(1765585315.558:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.560223 systemd-networkd[702]: lo: Gained carrier Dec 13 00:21:55.561013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:21:55.567533 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:21:55.571767 systemd[1]: Reached target network.target - Network. Dec 13 00:21:55.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.581301 kernel: audit: type=1130 audit(1765585315.571:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.708216 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:21:55.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.718290 kernel: audit: type=1130 audit(1765585315.713:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.721479 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 00:21:55.784253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 00:21:55.838259 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 00:21:55.838223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 00:21:55.865684 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 00:21:55.881808 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:21:55.886124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:21:55.898516 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 13 00:21:55.898567 kernel: AES CTR mode by8 optimization enabled Dec 13 00:21:55.887579 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 00:21:55.889863 systemd-networkd[702]: eth0: Link UP Dec 13 00:21:55.922700 kernel: audit: type=1131 audit(1765585315.909:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:55.890177 systemd-networkd[702]: eth0: Gained carrier Dec 13 00:21:55.890194 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:21:55.908334 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 00:21:55.909324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:21:55.909484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:21:55.909741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:21:55.913532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:21:55.921858 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:21:55.985824 disk-uuid[835]: Primary Header is updated. Dec 13 00:21:55.985824 disk-uuid[835]: Secondary Entries is updated. Dec 13 00:21:55.985824 disk-uuid[835]: Secondary Header is updated. Dec 13 00:21:56.091045 kernel: audit: type=1130 audit(1765585316.071:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:56.091075 kernel: audit: type=1130 audit(1765585316.079:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:56.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:56.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:56.059021 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 00:21:56.079170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:21:56.090822 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:21:56.091697 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:21:56.104057 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:21:56.109462 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 00:21:56.146083 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:21:56.154718 kernel: audit: type=1130 audit(1765585316.146:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:21:56.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.066440 disk-uuid[838]: Warning: The kernel is still using the old partition table. Dec 13 00:21:57.066440 disk-uuid[838]: The new table will be used at the next reboot or after you Dec 13 00:21:57.066440 disk-uuid[838]: run partprobe(8) or kpartx(8) Dec 13 00:21:57.066440 disk-uuid[838]: The operation has completed successfully. Dec 13 00:21:57.084887 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 00:21:57.092965 kernel: audit: type=1130 audit(1765585317.084:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.085037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 00:21:57.093346 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 00:21:57.134467 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862) Dec 13 00:21:57.134527 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:21:57.134540 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:21:57.139557 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:21:57.139584 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:21:57.147255 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:21:57.147973 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 00:21:57.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.150797 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 00:21:57.290960 ignition[881]: Ignition 2.24.0 Dec 13 00:21:57.290973 ignition[881]: Stage: fetch-offline Dec 13 00:21:57.291016 ignition[881]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:21:57.291029 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:21:57.291122 ignition[881]: parsed url from cmdline: "" Dec 13 00:21:57.291126 ignition[881]: no config URL provided Dec 13 00:21:57.291204 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 00:21:57.291216 ignition[881]: no config at "/usr/lib/ignition/user.ign" Dec 13 00:21:57.291296 ignition[881]: op(1): [started] loading QEMU firmware config module Dec 13 00:21:57.291301 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 00:21:57.300650 ignition[881]: op(1): [finished] loading QEMU firmware config module Dec 13 00:21:57.385509 ignition[881]: parsing config with SHA512: 230a3658da92043864806e2a388f74866ebe3893727137ff59051c643e771a68429257e980bb82cf721f54ddc08581b4e155cd75b234d9d8670a7f6e9800df13 Dec 13 00:21:57.390739 unknown[881]: fetched base config from "system" Dec 13 00:21:57.390753 unknown[881]: fetched user config from "qemu" Dec 13 00:21:57.391184 ignition[881]: fetch-offline: fetch-offline passed Dec 13 00:21:57.394590 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:21:57.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.391284 ignition[881]: Ignition finished successfully Dec 13 00:21:57.395154 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 00:21:57.396267 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 00:21:57.429096 ignition[891]: Ignition 2.24.0 Dec 13 00:21:57.429109 ignition[891]: Stage: kargs Dec 13 00:21:57.429325 ignition[891]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:21:57.429343 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:21:57.430366 ignition[891]: kargs: kargs passed Dec 13 00:21:57.435683 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 00:21:57.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.430418 ignition[891]: Ignition finished successfully Dec 13 00:21:57.439495 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 00:21:57.473681 ignition[898]: Ignition 2.24.0 Dec 13 00:21:57.473695 ignition[898]: Stage: disks Dec 13 00:21:57.473908 ignition[898]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:21:57.473924 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:21:57.475041 ignition[898]: disks: disks passed Dec 13 00:21:57.475109 ignition[898]: Ignition finished successfully Dec 13 00:21:57.480611 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 00:21:57.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.482898 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 13 00:21:57.485848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 00:21:57.489184 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:21:57.493212 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:21:57.496814 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:21:57.501730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 00:21:57.537942 systemd-fsck[907]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 13 00:21:57.546799 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 00:21:57.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:57.552679 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 00:21:57.624396 systemd-networkd[702]: eth0: Gained IPv6LL Dec 13 00:21:57.791266 kernel: EXT4-fs (vda9): mounted filesystem fc518408-2cc6-461e-9cc3-fcafcb4d05ba r/w with ordered data mode. Quota mode: none. Dec 13 00:21:57.791816 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 00:21:57.793526 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 00:21:57.853645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:21:57.858330 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 00:21:57.860086 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 00:21:57.860138 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 00:21:57.860175 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:21:57.870357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 00:21:57.872083 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 00:21:57.883716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (915) Dec 13 00:21:57.883749 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:21:57.883765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:21:57.887205 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:21:57.887287 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:21:57.888740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:21:58.070899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 00:21:58.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:58.073436 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 00:21:58.077946 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 00:21:58.107296 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:21:58.123324 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 00:21:58.125971 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 00:21:58.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:58.146702 ignition[1012]: INFO : Ignition 2.24.0 Dec 13 00:21:58.146702 ignition[1012]: INFO : Stage: mount Dec 13 00:21:58.149453 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:21:58.149453 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:21:58.153372 ignition[1012]: INFO : mount: mount passed Dec 13 00:21:58.154560 ignition[1012]: INFO : Ignition finished successfully Dec 13 00:21:58.158395 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 00:21:58.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:58.160257 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 00:21:58.187137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:21:58.225846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Dec 13 00:21:58.225933 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:21:58.225949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:21:58.231947 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:21:58.232023 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:21:58.234178 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:21:58.276102 ignition[1041]: INFO : Ignition 2.24.0 Dec 13 00:21:58.276102 ignition[1041]: INFO : Stage: files Dec 13 00:21:58.279297 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:21:58.279297 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:21:58.279297 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Dec 13 00:21:58.285042 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 00:21:58.285042 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 00:21:58.292952 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 00:21:58.295652 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 00:21:58.298068 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 00:21:58.296316 unknown[1041]: wrote ssh authorized keys file for user: core Dec 13 00:21:58.302165 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:21:58.305445 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 13 00:21:58.361326 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 00:21:58.467546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:21:58.467546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:21:58.474819 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:21:58.506571 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:21:58.506571 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:21:58.506571 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 13 00:21:58.784393 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 00:21:59.562824 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:21:59.562824 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 
13 00:21:59.569553 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 00:21:59.569553 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 00:21:59.600564 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:21:59.608851 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:21:59.611635 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 00:21:59.611635 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 00:21:59.611635 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 00:21:59.611635 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:21:59.611635 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:21:59.611635 ignition[1041]: INFO : files: files passed Dec 13 00:21:59.611635 ignition[1041]: INFO : Ignition finished successfully Dec 13 00:21:59.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.618379 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 00:21:59.621633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 00:21:59.631222 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 00:21:59.648186 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 00:21:59.648352 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 00:21:59.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.657548 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 00:21:59.664537 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:21:59.667693 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:21:59.670561 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:21:59.674226 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:21:59.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.678878 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Dec 13 00:21:59.681603 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 00:21:59.752155 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 00:21:59.752345 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 00:21:59.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.753848 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 00:21:59.758939 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 00:21:59.763731 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 00:21:59.765531 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 00:21:59.806429 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:21:59.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.808880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 00:21:59.834491 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:21:59.834701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:21:59.835895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:21:59.836733 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 00:21:59.846750 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 00:21:59.846960 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:21:59.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.852286 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 00:21:59.855448 systemd[1]: Stopped target basic.target - Basic System. Dec 13 00:21:59.856187 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 00:21:59.858914 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:21:59.862402 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 00:21:59.866066 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:21:59.866619 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 00:21:59.867152 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:21:59.867744 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 00:21:59.868259 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 00:21:59.884859 systemd[1]: Stopped target swap.target - Swaps. Dec 13 00:21:59.887459 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 00:21:59.887678 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:21:59.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.892366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:21:59.893212 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:21:59.896706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 00:21:59.899469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:21:59.900634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 00:21:59.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.900767 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 00:21:59.908015 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 00:21:59.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.908142 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:21:59.908985 systemd[1]: Stopped target paths.target - Path Units. Dec 13 00:21:59.913748 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 00:21:59.919316 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:21:59.933688 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 00:21:59.937837 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 00:21:59.938667 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 00:21:59.938770 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:21:59.943280 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 00:21:59.943376 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:21:59.946033 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 13 00:21:59.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.946118 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:21:59.949021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 00:21:59.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.949141 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:21:59.952017 systemd[1]: ignition-files.service: Deactivated successfully. 
Dec 13 00:21:59.952128 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 00:21:59.956712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 00:21:59.958599 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 00:21:59.958727 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:21:59.975526 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 00:21:59.976939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 00:21:59.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.977098 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:21:59.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.988387 ignition[1097]: INFO : Ignition 2.24.0 Dec 13 00:21:59.988387 ignition[1097]: INFO : Stage: umount Dec 13 00:21:59.988387 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:21:59.988387 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:21:59.988387 ignition[1097]: INFO : umount: umount passed Dec 13 00:21:59.988387 ignition[1097]: INFO : Ignition finished successfully Dec 13 00:21:59.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.980721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 00:22:00.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.980958 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:22:00.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.984415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 00:22:00.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.984648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:21:59.991281 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 00:21:59.991411 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Dec 13 00:22:00.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:59.996820 systemd[1]: Stopped target network.target - Network. Dec 13 00:22:00.001754 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 00:22:00.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.001882 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 00:22:00.003645 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 00:22:00.032000 audit: BPF prog-id=6 op=UNLOAD Dec 13 00:22:00.003704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 00:22:00.005715 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 00:22:00.005773 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 00:22:00.009750 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 00:22:00.009804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 00:22:00.013263 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 00:22:00.014027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 00:22:00.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.020286 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 00:22:00.020987 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 00:22:00.021102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 00:22:00.025712 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 00:22:00.025843 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 00:22:00.041177 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 00:22:00.133000 audit: BPF prog-id=9 op=UNLOAD Dec 13 00:22:00.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.041353 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 00:22:00.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.082961 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 00:22:00.084679 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 00:22:00.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.084727 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 00:22:00.128765 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 00:22:00.131329 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 00:22:00.131439 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:22:00.135194 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 00:22:00.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.135286 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:22:00.138871 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 00:22:00.138927 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 00:22:00.142519 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:22:00.152979 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 00:22:00.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.153119 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 00:22:00.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.168420 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 00:22:00.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.168522 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 00:22:00.173583 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 00:22:00.173770 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:22:00.175122 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 00:22:00.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.175266 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 00:22:00.179216 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 00:22:00.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.179322 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 00:22:00.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.181124 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 00:22:00.181196 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:22:00.184594 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 00:22:00.184664 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 00:22:00.191749 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 00:22:00.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.191803 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 00:22:00.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.195417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 00:22:00.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.195471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:22:00.202985 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 00:22:00.205941 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 13 00:22:00.205999 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:22:00.206934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 00:22:00.206986 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:22:00.207782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:22:00.207830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:22:00.238021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 00:22:00.238170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 00:22:00.242379 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 00:22:00.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:00.244147 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 00:22:00.289554 systemd[1]: Switching root. Dec 13 00:22:00.325659 systemd-journald[315]: Journal stopped Dec 13 00:22:02.959327 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). 
Dec 13 00:22:02.959395 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 00:22:02.959411 kernel: SELinux: policy capability open_perms=1 Dec 13 00:22:02.959424 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 00:22:02.959480 kernel: SELinux: policy capability always_check_network=0 Dec 13 00:22:02.959497 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 00:22:02.959528 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 00:22:02.959552 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 00:22:02.959565 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 00:22:02.959583 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 00:22:02.959596 kernel: kauditd_printk_skb: 53 callbacks suppressed Dec 13 00:22:02.959625 kernel: audit: type=1403 audit(1765585321.551:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 00:22:02.959649 systemd[1]: Successfully loaded SELinux policy in 94.847ms. Dec 13 00:22:02.959677 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.178ms. Dec 13 00:22:02.959696 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:22:02.959720 systemd[1]: Detected virtualization kvm. Dec 13 00:22:02.959736 systemd[1]: Detected architecture x86-64. Dec 13 00:22:02.959753 systemd[1]: Detected first boot. Dec 13 00:22:02.959777 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:22:02.959791 kernel: audit: type=1334 audit(1765585321.758:82): prog-id=10 op=LOAD Dec 13 00:22:02.959803 kernel: audit: type=1334 audit(1765585321.758:83): prog-id=10 op=UNLOAD Dec 13 00:22:02.959816 kernel: audit: type=1334 audit(1765585321.758:84): prog-id=11 op=LOAD Dec 13 00:22:02.959829 kernel: audit: type=1334 audit(1765585321.758:85): prog-id=11 op=UNLOAD Dec 13 00:22:02.959841 zram_generator::config[1141]: No configuration found. Dec 13 00:22:02.959867 kernel: Guest personality initialized and is inactive Dec 13 00:22:02.959880 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 13 00:22:02.959894 kernel: Initialized host personality Dec 13 00:22:02.959913 kernel: NET: Registered PF_VSOCK protocol family Dec 13 00:22:02.959926 systemd[1]: Populated /etc with preset unit settings. Dec 13 00:22:02.959939 kernel: audit: type=1334 audit(1765585322.342:86): prog-id=12 op=LOAD Dec 13 00:22:02.959954 kernel: audit: type=1334 audit(1765585322.342:87): prog-id=3 op=UNLOAD Dec 13 00:22:02.959980 kernel: audit: type=1334 audit(1765585322.342:88): prog-id=13 op=LOAD Dec 13 00:22:02.959994 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 00:22:02.960007 kernel: audit: type=1334 audit(1765585322.342:89): prog-id=14 op=LOAD Dec 13 00:22:02.960020 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 00:22:02.960032 kernel: audit: type=1334 audit(1765585322.342:90): prog-id=4 op=UNLOAD Dec 13 00:22:02.960054 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 00:22:02.960072 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 00:22:02.960110 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Dec 13 00:22:02.960128 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 00:22:02.960146 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 00:22:02.960160 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 00:22:02.960174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 00:22:02.960191 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 00:22:02.960213 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 00:22:02.960230 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:22:02.960257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:22:02.960271 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 00:22:02.960295 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 00:22:02.960309 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 00:22:02.960325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:22:02.960347 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 00:22:02.960364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:22:02.960377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:22:02.960391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 00:22:02.960404 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 00:22:02.960417 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 00:22:02.960431 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 00:22:02.960451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:22:02.960467 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:22:02.960480 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 00:22:02.960493 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:22:02.960510 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:22:02.960523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 00:22:02.960537 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 00:22:02.960553 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 00:22:02.960575 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:22:02.960588 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 13 00:22:02.960603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:22:02.960616 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 00:22:02.960633 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 00:22:02.960652 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:22:02.960665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 00:22:02.960689 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 00:22:02.960706 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 00:22:02.960723 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 00:22:02.960739 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 00:22:02.960756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:02.960772 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 00:22:02.960788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 00:22:02.960814 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 00:22:02.960836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 00:22:02.960858 systemd[1]: Reached target machines.target - Containers. Dec 13 00:22:02.960878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 00:22:02.960895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:22:02.960911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:22:02.960943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 00:22:02.960963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:22:02.960977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:22:02.960990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:22:02.961003 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 00:22:02.961016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:22:02.961033 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 00:22:02.961056 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 00:22:02.961070 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 00:22:02.961095 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 00:22:02.961113 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 00:22:02.961140 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:22:02.961158 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:22:02.961174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:22:02.961188 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:22:02.961201 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 00:22:02.961215 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 13 00:22:02.961229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 13 00:22:02.961407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:02.961424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 00:22:02.961437 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 00:22:02.961450 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 00:22:02.961463 kernel: fuse: init (API version 7.41) Dec 13 00:22:02.961476 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 00:22:02.961492 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 00:22:02.961513 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 00:22:02.961546 kernel: ACPI: bus type drm_connector registered Dec 13 00:22:02.961559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:22:02.961573 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 00:22:02.961586 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 00:22:02.961623 systemd-journald[1204]: Collecting audit messages is enabled. Dec 13 00:22:02.961671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:22:02.961690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:22:02.961707 systemd-journald[1204]: Journal started Dec 13 00:22:02.961740 systemd-journald[1204]: Runtime Journal (/run/log/journal/88ff58cce0924c20abe0bfed1244fb42) is 6M, max 48.2M, 42.1M free. Dec 13 00:22:02.534000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 00:22:02.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.820000 audit: BPF prog-id=14 op=UNLOAD Dec 13 00:22:02.820000 audit: BPF prog-id=13 op=UNLOAD Dec 13 00:22:02.821000 audit: BPF prog-id=15 op=LOAD Dec 13 00:22:02.821000 audit: BPF prog-id=16 op=LOAD Dec 13 00:22:02.821000 audit: BPF prog-id=17 op=LOAD Dec 13 00:22:02.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:02.957000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 00:22:02.957000 audit[1204]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe99e02a10 a2=4000 a3=0 items=0 ppid=1 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:02.957000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 00:22:02.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.318809 systemd[1]: Queued start job for default target multi-user.target. Dec 13 00:22:02.343476 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 00:22:02.344198 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 00:22:02.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.973398 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:22:02.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.976177 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:22:02.976633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:22:02.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.988287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:22:02.988533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:22:02.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:02.991046 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 00:22:02.991346 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 00:22:02.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.993832 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:22:02.994053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:22:02.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.996191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:22:02.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:02.998601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:22:03.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.024632 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 00:22:03.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.027130 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 00:22:03.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.042673 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:22:03.045363 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 00:22:03.048998 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 00:22:03.052328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 00:22:03.056369 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 00:22:03.056417 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:22:03.060040 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Dec 13 00:22:03.063322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:22:03.063506 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:22:03.067330 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 00:22:03.072004 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 00:22:03.074209 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:22:03.075524 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 00:22:03.077400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:22:03.079775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:22:03.085411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 00:22:03.088879 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 00:22:03.089738 systemd-journald[1204]: Time spent on flushing to /var/log/journal/88ff58cce0924c20abe0bfed1244fb42 is 151.868ms for 1105 entries. Dec 13 00:22:03.089738 systemd-journald[1204]: System Journal (/var/log/journal/88ff58cce0924c20abe0bfed1244fb42) is 8M, max 163.5M, 155.5M free. Dec 13 00:22:03.539470 systemd-journald[1204]: Received client request to flush runtime journal. Dec 13 00:22:03.539559 kernel: loop1: detected capacity change from 0 to 219144 Dec 13 00:22:03.539596 kernel: loop2: detected capacity change from 0 to 375256 Dec 13 00:22:03.539620 kernel: loop2: p1 p2 p3 Dec 13 00:22:03.539643 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Dec 13 00:22:03.539665 kernel: loop3: detected capacity change from 0 to 171112 Dec 13 00:22:03.539692 kernel: loop3: p1 p2 p3 Dec 13 00:22:03.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.092947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:22:03.095226 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 00:22:03.097741 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 00:22:03.100670 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Dec 13 00:22:03.106528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 00:22:03.110435 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 00:22:03.117057 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 00:22:03.249184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:22:03.541293 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 00:22:03.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.637277 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 00:22:03.656261 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 00:22:03.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.660000 audit: BPF prog-id=18 op=LOAD Dec 13 00:22:03.660000 audit: BPF prog-id=19 op=LOAD Dec 13 00:22:03.660000 audit: BPF prog-id=20 op=LOAD Dec 13 00:22:03.661722 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 00:22:03.664268 kernel: loop4: detected capacity change from 0 to 219144 Dec 13 00:22:03.665000 audit: BPF prog-id=21 op=LOAD Dec 13 00:22:03.666875 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:22:03.677720 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:22:03.680000 audit: BPF prog-id=22 op=LOAD Dec 13 00:22:03.681000 audit: BPF prog-id=23 op=LOAD Dec 13 00:22:03.681000 audit: BPF prog-id=24 op=LOAD Dec 13 00:22:03.681968 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 13 00:22:03.683000 audit: BPF prog-id=25 op=LOAD Dec 13 00:22:03.683000 audit: BPF prog-id=26 op=LOAD Dec 13 00:22:03.683000 audit: BPF prog-id=27 op=LOAD Dec 13 00:22:03.684551 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 00:22:03.719259 kernel: loop5: detected capacity change from 0 to 375256 Dec 13 00:22:03.721257 kernel: loop5: p1 p2 p3 Dec 13 00:22:03.722009 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Dec 13 00:22:03.722036 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Dec 13 00:22:03.729386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:22:03.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.741919 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 00:22:03.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:03.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.744460 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 00:22:03.747727 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 00:22:03.760051 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:22:03.760151 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:22:03.760180 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:22:03.762843 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:22:03.762877 (sd-merge)[1279]: device-mapper: reload ioctl on c81b0b335c4f741d8803812340292f37f57a6bdf618683fbcdb11178b8725544-verity (253:1) failed: Invalid argument Dec 13 00:22:03.770319 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:22:03.823578 systemd-oomd[1280]: No swap; memory pressure usage will be degraded Dec 13 00:22:03.824479 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 00:22:03.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.827225 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 00:22:03.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.841450 systemd-resolved[1281]: Positive Trust Anchors: Dec 13 00:22:03.841466 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:22:03.841471 systemd-resolved[1281]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:22:03.841503 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:22:03.845918 systemd-resolved[1281]: Defaulting to hostname 'linux'. Dec 13 00:22:03.847515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:22:03.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:03.868909 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:22:04.333906 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
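For reference, the two "Positive Trust Anchors" entries above are the root-zone DNSSEC DS records in standard presentation format. A minimal Python sketch of how such a line breaks into its fields (field names follow RFC 4034; the record text is copied from the log, and nothing further is implied about systemd-resolved's internals):

    # Parse a DNSSEC DS trust-anchor line as logged by systemd-resolved above.
    # Fields per RFC 4034: owner, class, type, key tag, algorithm, digest type, digest.
    from collections import namedtuple

    DS = namedtuple("DS", "owner rrclass rrtype key_tag algorithm digest_type digest")

    def parse_ds(line: str) -> DS:
        owner, rrclass, rrtype, key_tag, alg, digest_type, digest = line.split()
        assert rrtype == "DS"
        return DS(owner, rrclass, rrtype, int(key_tag), int(alg), int(digest_type), digest)

    anchor = parse_ds(". IN DS 20326 8 2 "
                      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    print(anchor.key_tag, anchor.algorithm, anchor.digest_type)
    # -> 20326 8 2  (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256)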
Dec 13 00:22:04.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:04.336000 audit: BPF prog-id=8 op=UNLOAD Dec 13 00:22:04.336000 audit: BPF prog-id=7 op=UNLOAD Dec 13 00:22:04.336000 audit: BPF prog-id=28 op=LOAD Dec 13 00:22:04.336000 audit: BPF prog-id=29 op=LOAD Dec 13 00:22:04.337820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:22:04.374773 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 00:22:04.380128 systemd-udevd[1306]: Using default interface naming scheme 'v257'. Dec 13 00:22:04.402305 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:22:04.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:04.410000 audit: BPF prog-id=30 op=LOAD Dec 13 00:22:04.411579 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:22:04.500607 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 00:22:04.545286 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 00:22:04.571655 systemd-networkd[1316]: lo: Link UP Dec 13 00:22:04.571666 systemd-networkd[1316]: lo: Gained carrier Dec 13 00:22:04.573278 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 00:22:04.575433 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:22:04.576545 systemd-networkd[1316]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:22:04.576639 systemd-networkd[1316]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:22:04.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:04.578231 systemd-networkd[1316]: eth0: Link UP Dec 13 00:22:04.585612 kernel: ACPI: button: Power Button [PWRF] Dec 13 00:22:04.578301 systemd[1]: Reached target network.target - Network. Dec 13 00:22:04.578482 systemd-networkd[1316]: eth0: Gained carrier Dec 13 00:22:04.578502 systemd-networkd[1316]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:22:04.582256 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 00:22:04.590600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 00:22:04.595402 systemd-networkd[1316]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:22:04.629681 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 00:22:04.630131 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 00:22:04.650770 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
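The DHCPv4 line above reports the lease 10.0.0.91/16 with gateway 10.0.0.1. A small illustrative sketch with Python's ipaddress module shows what that prefix implies (the addresses are taken directly from the log):

    # Inspect the DHCPv4 lease reported by systemd-networkd above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.91/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                     # 10.0.0.0/16
    print(iface.network.broadcast_address)   # 10.0.255.255
    print(gateway in iface.network)          # True: the gateway is on-link for this prefix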
Dec 13 00:22:04.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:04.775757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:22:04.782300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 00:22:04.816091 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 00:22:04.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:04.859357 kernel: kvm_amd: TSC scaling supported Dec 13 00:22:04.859421 kernel: kvm_amd: Nested Virtualization enabled Dec 13 00:22:04.859436 kernel: kvm_amd: Nested Paging enabled Dec 13 00:22:04.859450 kernel: kvm_amd: LBR virtualization supported Dec 13 00:22:04.861937 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 00:22:04.861983 kernel: kvm_amd: Virtual GIF supported Dec 13 00:22:04.883338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:22:04.909259 kernel: EDAC MC: Ver: 3.0.0 Dec 13 00:22:04.922261 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Dec 13 00:22:04.933274 kernel: loop6: detected capacity change from 0 to 171112 Dec 13 00:22:04.935324 kernel: loop6: p1 p2 p3 Dec 13 00:22:04.947640 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:22:04.947696 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:22:04.947712 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:22:04.948902 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:22:04.948958 (sd-merge)[1279]: device-mapper: reload ioctl on af67e6a29067aeda0590a0009488436dd8f718bac6be743160aad6f147c2927f-verity (253:2) failed: Invalid argument Dec 13 00:22:04.956264 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:22:04.982270 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 00:22:04.982656 (sd-merge)[1279]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 13 00:22:04.986836 (sd-merge)[1279]: Merged extensions into '/usr'. Dec 13 00:22:04.991472 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 00:22:04.991492 systemd[1]: Reloading... Dec 13 00:22:05.076772 zram_generator::config[1409]: No configuration found. Dec 13 00:22:05.384917 systemd[1]: Reloading finished in 392 ms. Dec 13 00:22:05.420922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 00:22:05.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.423813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
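The (sd-merge) lines above show systemd-sysext attaching the extension images 'containerd-flatcar.raw', 'docker-flatcar.raw', and 'kubernetes.raw' as verity-backed loop devices and merging them into /usr. A rough sketch of the discovery step only, assuming the usual extension search directories (the directory list is a typical default rather than something stated in this log, and the real tool additionally handles verity setup, version matching, and the overlay mount itself):

    # Illustrative only: list system-extension images the way a sysext-style
    # tool might discover them. The search paths below are assumed defaults.
    from pathlib import Path

    SEARCH_DIRS = [
        Path("/etc/extensions"),
        Path("/run/extensions"),
        Path("/var/lib/extensions"),
    ]

    def find_extension_images():
        images = []
        for d in SEARCH_DIRS:
            if d.is_dir():
                images.extend(sorted(d.glob("*.raw")))
        return images

    for img in find_extension_images():
        print(img.name)   # e.g. kubernetes.raw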
Dec 13 00:22:05.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.482592 systemd[1]: Starting ensure-sysext.service... Dec 13 00:22:05.485683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 00:22:05.489000 audit: BPF prog-id=31 op=LOAD Dec 13 00:22:05.489000 audit: BPF prog-id=25 op=UNLOAD Dec 13 00:22:05.489000 audit: BPF prog-id=32 op=LOAD Dec 13 00:22:05.489000 audit: BPF prog-id=33 op=LOAD Dec 13 00:22:05.489000 audit: BPF prog-id=26 op=UNLOAD Dec 13 00:22:05.489000 audit: BPF prog-id=27 op=UNLOAD Dec 13 00:22:05.490000 audit: BPF prog-id=34 op=LOAD Dec 13 00:22:05.490000 audit: BPF prog-id=30 op=UNLOAD Dec 13 00:22:05.491000 audit: BPF prog-id=35 op=LOAD Dec 13 00:22:05.491000 audit: BPF prog-id=18 op=UNLOAD Dec 13 00:22:05.492000 audit: BPF prog-id=36 op=LOAD Dec 13 00:22:05.492000 audit: BPF prog-id=37 op=LOAD Dec 13 00:22:05.492000 audit: BPF prog-id=19 op=UNLOAD Dec 13 00:22:05.492000 audit: BPF prog-id=20 op=UNLOAD Dec 13 00:22:05.494000 audit: BPF prog-id=38 op=LOAD Dec 13 00:22:05.494000 audit: BPF prog-id=22 op=UNLOAD Dec 13 00:22:05.494000 audit: BPF prog-id=39 op=LOAD Dec 13 00:22:05.494000 audit: BPF prog-id=40 op=LOAD Dec 13 00:22:05.494000 audit: BPF prog-id=23 op=UNLOAD Dec 13 00:22:05.494000 audit: BPF prog-id=24 op=UNLOAD Dec 13 00:22:05.496000 audit: BPF prog-id=41 op=LOAD Dec 13 00:22:05.496000 audit: BPF prog-id=15 op=UNLOAD Dec 13 00:22:05.496000 audit: BPF prog-id=42 op=LOAD Dec 13 00:22:05.496000 audit: BPF prog-id=43 op=LOAD Dec 13 00:22:05.496000 audit: BPF prog-id=16 op=UNLOAD Dec 13 00:22:05.496000 audit: BPF prog-id=17 op=UNLOAD Dec 13 00:22:05.498000 audit: BPF prog-id=44 op=LOAD Dec 13 00:22:05.498000 audit: BPF prog-id=21 op=UNLOAD Dec 13 00:22:05.498000 audit: BPF prog-id=45 op=LOAD Dec 13 00:22:05.499000 audit: BPF prog-id=46 op=LOAD Dec 13 00:22:05.499000 audit: BPF prog-id=28 op=UNLOAD Dec 13 00:22:05.499000 audit: BPF prog-id=29 op=UNLOAD Dec 13 00:22:05.507473 systemd[1]: Reload requested from client PID 1443 ('systemctl') (unit ensure-sysext.service)... Dec 13 00:22:05.507495 systemd[1]: Reloading... Dec 13 00:22:05.507554 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 00:22:05.507592 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 00:22:05.507920 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 00:22:05.509386 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Dec 13 00:22:05.509464 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Dec 13 00:22:05.516019 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:22:05.516031 systemd-tmpfiles[1444]: Skipping /boot Dec 13 00:22:05.527936 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:22:05.527952 systemd-tmpfiles[1444]: Skipping /boot Dec 13 00:22:05.579334 zram_generator::config[1481]: No configuration found. Dec 13 00:22:05.832746 systemd[1]: Reloading finished in 324 ms. 
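The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean two tmpfiles.d entries claimed the same path and the later one was dropped. A conceptual sketch of that de-duplication, using made-up entries (the real parser handles the full tmpfiles.d syntax; the lines below are hypothetical and are not the actual contents of nfs-utils.conf or provision.conf):

    # Conceptual sketch: keep the first tmpfiles.d entry per path, warn on later ones.
    # The entries are hypothetical examples, not the real configuration.
    entries = [
        ("d", "/var/lib/nfs/sm", "0700"),
        ("d", "/var/lib/nfs/sm", "0755"),   # duplicate path -> ignored
        ("d", "/root", "0700"),
    ]

    seen = {}
    for entry in entries:
        _type, path, mode = entry
        if path in seen:
            print(f"Duplicate line for path {path!r}, ignoring.")
            continue
        seen[path] = entry

    print(sorted(seen))   # ['/root', '/var/lib/nfs/sm']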
Dec 13 00:22:05.862000 audit: BPF prog-id=47 op=LOAD Dec 13 00:22:05.862000 audit: BPF prog-id=41 op=UNLOAD Dec 13 00:22:05.862000 audit: BPF prog-id=48 op=LOAD Dec 13 00:22:05.862000 audit: BPF prog-id=49 op=LOAD Dec 13 00:22:05.862000 audit: BPF prog-id=42 op=UNLOAD Dec 13 00:22:05.862000 audit: BPF prog-id=43 op=UNLOAD Dec 13 00:22:05.863000 audit: BPF prog-id=50 op=LOAD Dec 13 00:22:05.863000 audit: BPF prog-id=38 op=UNLOAD Dec 13 00:22:05.863000 audit: BPF prog-id=51 op=LOAD Dec 13 00:22:05.863000 audit: BPF prog-id=52 op=LOAD Dec 13 00:22:05.863000 audit: BPF prog-id=39 op=UNLOAD Dec 13 00:22:05.863000 audit: BPF prog-id=40 op=UNLOAD Dec 13 00:22:05.864000 audit: BPF prog-id=53 op=LOAD Dec 13 00:22:05.873000 audit: BPF prog-id=34 op=UNLOAD Dec 13 00:22:05.873000 audit: BPF prog-id=54 op=LOAD Dec 13 00:22:05.873000 audit: BPF prog-id=55 op=LOAD Dec 13 00:22:05.873000 audit: BPF prog-id=45 op=UNLOAD Dec 13 00:22:05.874000 audit: BPF prog-id=46 op=UNLOAD Dec 13 00:22:05.875000 audit: BPF prog-id=56 op=LOAD Dec 13 00:22:05.875000 audit: BPF prog-id=44 op=UNLOAD Dec 13 00:22:05.876000 audit: BPF prog-id=57 op=LOAD Dec 13 00:22:05.876000 audit: BPF prog-id=35 op=UNLOAD Dec 13 00:22:05.876000 audit: BPF prog-id=58 op=LOAD Dec 13 00:22:05.876000 audit: BPF prog-id=59 op=LOAD Dec 13 00:22:05.876000 audit: BPF prog-id=36 op=UNLOAD Dec 13 00:22:05.876000 audit: BPF prog-id=37 op=UNLOAD Dec 13 00:22:05.877000 audit: BPF prog-id=60 op=LOAD Dec 13 00:22:05.877000 audit: BPF prog-id=31 op=UNLOAD Dec 13 00:22:05.877000 audit: BPF prog-id=61 op=LOAD Dec 13 00:22:05.877000 audit: BPF prog-id=62 op=LOAD Dec 13 00:22:05.877000 audit: BPF prog-id=32 op=UNLOAD Dec 13 00:22:05.877000 audit: BPF prog-id=33 op=UNLOAD Dec 13 00:22:05.882593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:22:05.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.896667 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:22:05.902425 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 00:22:05.910647 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 00:22:05.914522 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 00:22:05.918565 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 00:22:05.924831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:05.925039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:22:05.929310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:22:05.939335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:22:05.945670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:22:05.948130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:22:05.948519 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 13 00:22:05.948653 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:22:05.948785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:05.952000 audit[1524]: SYSTEM_BOOT pid=1524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.954956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:22:05.958586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:22:05.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.963681 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:22:05.964051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:22:05.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.967642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:22:05.967961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:22:05.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:05.979203 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:05.980467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:22:05.982559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 13 00:22:05.985000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:22:05.985000 audit[1546]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5a6bd970 a2=420 a3=0 items=0 ppid=1516 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.985000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:22:05.985676 augenrules[1546]: No rules Dec 13 00:22:05.986748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:22:05.990462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:22:05.992318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:22:05.992510 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:22:05.992606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:22:05.992703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:05.994171 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:22:05.995166 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:22:06.003468 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 00:22:06.008455 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 00:22:06.011754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:22:06.012779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:22:06.015839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:22:06.016213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:22:06.019513 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:22:06.019813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:22:06.034297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:06.037268 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:22:06.039262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:22:06.067612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:22:06.072656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:22:06.077218 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:22:06.087964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:22:06.090161 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
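The PROCTITLE audit records above carry the recorded command line as NUL-separated hex. Decoding the value logged here shows the exact auditctl invocation behind the rule load:

    # Decode the hex-encoded, NUL-separated proctitle from the audit record above.
    hexvalue = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(hexvalue).split(b"\x00")
    print([a.decode() for a in argv])
    # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']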
Dec 13 00:22:06.092054 augenrules[1560]: /sbin/augenrules: No change Dec 13 00:22:06.090420 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:22:06.090527 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:22:06.090651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:22:06.092790 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 00:22:06.098168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:22:06.098879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:22:06.101000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:22:06.101000 audit[1581]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdeea9410 a2=420 a3=0 items=0 ppid=1560 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.101000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:22:06.101491 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:22:06.101739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:22:06.101000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:22:06.101000 audit[1581]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdeeab8a0 a2=420 a3=0 items=0 ppid=1560 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.101000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:22:06.102305 augenrules[1581]: No rules Dec 13 00:22:06.104145 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:22:06.104458 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:22:06.106670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:22:06.106923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:22:06.109737 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:22:06.110022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:22:06.117445 systemd[1]: Finished ensure-sysext.service. Dec 13 00:22:06.123079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:22:06.123144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:22:06.125311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Dec 13 00:22:06.127173 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 00:22:06.315343 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 00:22:06.317815 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 00:22:06.320325 systemd-timesyncd[1593]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 00:22:06.320378 systemd-timesyncd[1593]: Initial clock synchronization to Sat 2025-12-13 00:22:06.143804 UTC. Dec 13 00:22:06.392563 systemd-networkd[1316]: eth0: Gained IPv6LL Dec 13 00:22:06.395866 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 00:22:06.398277 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 00:22:06.832037 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 00:22:06.840144 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 00:22:06.844394 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 00:22:06.914635 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 00:22:06.917265 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:22:06.919547 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 00:22:06.921894 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 00:22:06.924428 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 13 00:22:06.926952 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 00:22:06.929205 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 00:22:06.932039 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 13 00:22:06.934678 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 00:22:06.936836 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 00:22:06.939092 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 00:22:06.939146 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:22:06.940777 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:22:06.943711 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 00:22:06.948205 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 00:22:06.954711 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 00:22:06.957359 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 00:22:06.959503 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 13 00:22:06.964916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 00:22:06.967159 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 00:22:06.970111 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 00:22:06.972873 systemd[1]: Reached target sockets.target - Socket Units. 
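The ldconfig warning above ("/usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is ldconfig noticing that the file does not begin with the 4-byte ELF magic \x7fELF; ld.so.conf is a plain-text configuration file, so this reads as a benign notice rather than a failure. A tiny sketch of the same check (illustrative, not ldconfig's actual code):

    # Illustrative check of the condition ldconfig complains about above:
    # an ELF object starts with the 4-byte magic b"\x7fELF".
    ELF_MAGIC = b"\x7fELF"

    def looks_like_elf(path: str) -> bool:
        with open(path, "rb") as f:
            return f.read(4) == ELF_MAGIC

    # A text file such as /etc/ld.so.conf would return False here.
    print(looks_like_elf("/bin/true"))   # typically True on a Linux system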
Dec 13 00:22:06.974656 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:22:06.976220 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:22:06.976269 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:22:06.977666 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 00:22:06.980834 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 00:22:06.983752 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 00:22:07.001977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 00:22:07.005319 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 00:22:07.008086 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 00:22:07.009659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 00:22:07.012540 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 13 00:22:07.015542 jq[1607]: false Dec 13 00:22:07.017351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:07.021291 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 00:22:07.025432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 00:22:07.029253 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Refreshing passwd entry cache Dec 13 00:22:07.028276 oslogin_cache_refresh[1609]: Refreshing passwd entry cache Dec 13 00:22:07.030370 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 00:22:07.031907 extend-filesystems[1608]: Found /dev/vda6 Dec 13 00:22:07.037222 extend-filesystems[1608]: Found /dev/vda9 Dec 13 00:22:07.040828 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Failure getting users, quitting Dec 13 00:22:07.040828 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:22:07.040828 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Refreshing group entry cache Dec 13 00:22:07.036406 oslogin_cache_refresh[1609]: Failure getting users, quitting Dec 13 00:22:07.038380 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 00:22:07.036429 oslogin_cache_refresh[1609]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:22:07.036500 oslogin_cache_refresh[1609]: Refreshing group entry cache Dec 13 00:22:07.043642 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 00:22:07.044830 extend-filesystems[1608]: Checking size of /dev/vda9 Dec 13 00:22:07.051397 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Failure getting groups, quitting Dec 13 00:22:07.051397 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:22:07.045423 oslogin_cache_refresh[1609]: Failure getting groups, quitting Dec 13 00:22:07.045438 oslogin_cache_refresh[1609]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 13 00:22:07.058114 extend-filesystems[1608]: Resized partition /dev/vda9 Dec 13 00:22:07.061295 extend-filesystems[1632]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 00:22:07.067383 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 00:22:07.069664 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 00:22:07.071508 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 00:22:07.072092 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 00:22:07.076468 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 00:22:07.080451 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 00:22:07.097089 jq[1642]: true Dec 13 00:22:07.098182 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 00:22:07.101179 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 00:22:07.101496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 00:22:07.101876 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 13 00:22:07.102137 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 13 00:22:07.107700 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 00:22:07.107981 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 00:22:07.111253 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 00:22:07.111889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 00:22:07.139051 update_engine[1639]: I20251213 00:22:07.135740 1639 main.cc:92] Flatcar Update Engine starting Dec 13 00:22:07.113876 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 00:22:07.114160 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 00:22:07.141768 extend-filesystems[1632]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 00:22:07.141768 extend-filesystems[1632]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 00:22:07.141768 extend-filesystems[1632]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 13 00:22:07.149063 extend-filesystems[1608]: Resized filesystem in /dev/vda9 Dec 13 00:22:07.144742 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 00:22:07.147522 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 00:22:07.168955 jq[1651]: true Dec 13 00:22:07.174748 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 00:22:07.175102 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 00:22:07.199734 tar[1649]: linux-amd64/LICENSE Dec 13 00:22:07.200263 tar[1649]: linux-amd64/helm Dec 13 00:22:07.227533 sshd_keygen[1641]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 00:22:07.213604 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 00:22:07.266253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 00:22:07.271175 systemd[1]: Starting issuegen.service - Generate /run/issue... 
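The resize messages above are in units of 4 KiB filesystem blocks ("(4k) blocks"). Converting the before/after block counts from the log into sizes makes the growth concrete:

    # Convert the ext4 block counts logged above (4 KiB blocks) into sizes.
    BLOCK = 4096
    before = 456704 * BLOCK
    after = 1784827 * BLOCK

    gib = 1024 ** 3
    print(f"before: {before} bytes (~{before / gib:.2f} GiB)")   # ~1.74 GiB
    print(f"after:  {after} bytes (~{after / gib:.2f} GiB)")     # ~6.81 GiB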
Dec 13 00:22:07.281712 bash[1696]: Updated "/home/core/.ssh/authorized_keys" Dec 13 00:22:07.282295 systemd-logind[1637]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 00:22:07.282326 systemd-logind[1637]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 00:22:07.283862 systemd-logind[1637]: New seat seat0. Dec 13 00:22:07.285882 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 00:22:07.289719 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 00:22:07.291432 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 00:22:07.304108 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 00:22:07.304465 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 00:22:07.310155 dbus-daemon[1605]: [system] SELinux support is enabled Dec 13 00:22:07.311165 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 00:22:07.314547 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 00:22:07.318179 update_engine[1639]: I20251213 00:22:07.318006 1639 update_check_scheduler.cc:74] Next update check in 2m25s Dec 13 00:22:07.321367 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 00:22:07.321401 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 00:22:07.325683 dbus-daemon[1605]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 00:22:07.326345 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 00:22:07.326362 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 00:22:07.332966 systemd[1]: Started update-engine.service - Update Engine. Dec 13 00:22:07.376705 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 00:22:07.386952 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 00:22:07.395596 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 00:22:07.399283 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 00:22:07.401320 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 00:22:07.515295 locksmithd[1706]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 00:22:07.639135 containerd[1654]: time="2025-12-13T00:22:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 00:22:07.640625 containerd[1654]: time="2025-12-13T00:22:07.640317834Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 00:22:07.657332 containerd[1654]: time="2025-12-13T00:22:07.657231152Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.82µs" Dec 13 00:22:07.657332 containerd[1654]: time="2025-12-13T00:22:07.657309859Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 00:22:07.657501 containerd[1654]: time="2025-12-13T00:22:07.657372801Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 00:22:07.657501 containerd[1654]: time="2025-12-13T00:22:07.657389261Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 00:22:07.657850 containerd[1654]: time="2025-12-13T00:22:07.657709144Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 00:22:07.657850 containerd[1654]: time="2025-12-13T00:22:07.657741144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:22:07.657850 containerd[1654]: time="2025-12-13T00:22:07.657828345Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:22:07.657850 containerd[1654]: time="2025-12-13T00:22:07.657845521Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:22:07.658272 containerd[1654]: time="2025-12-13T00:22:07.658224084Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:22:07.658272 containerd[1654]: time="2025-12-13T00:22:07.658265813Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:22:07.658362 containerd[1654]: time="2025-12-13T00:22:07.658284556Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:22:07.658362 containerd[1654]: time="2025-12-13T00:22:07.658296795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 00:22:07.658820 containerd[1654]: time="2025-12-13T00:22:07.658593240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 00:22:07.658820 containerd[1654]: time="2025-12-13T00:22:07.658718447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 00:22:07.659127 containerd[1654]: time="2025-12-13T00:22:07.659084253Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:22:07.659174 containerd[1654]: time="2025-12-13T00:22:07.659157797Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:22:07.659227 containerd[1654]: time="2025-12-13T00:22:07.659176814Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 00:22:07.660014 containerd[1654]: time="2025-12-13T00:22:07.659864932Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 00:22:07.660680 containerd[1654]: time="2025-12-13T00:22:07.660649941Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 00:22:07.660778 containerd[1654]: time="2025-12-13T00:22:07.660747823Z" level=info msg="metadata content store policy set" policy=shared Dec 13 00:22:07.668676 containerd[1654]: time="2025-12-13T00:22:07.668616680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 00:22:07.668725 containerd[1654]: time="2025-12-13T00:22:07.668699443Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:22:07.668853 containerd[1654]: time="2025-12-13T00:22:07.668823280Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:22:07.668853 containerd[1654]: time="2025-12-13T00:22:07.668846990Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 00:22:07.668917 containerd[1654]: time="2025-12-13T00:22:07.668868291Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 00:22:07.668938 containerd[1654]: time="2025-12-13T00:22:07.668915184Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 00:22:07.668938 containerd[1654]: time="2025-12-13T00:22:07.668933506Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 00:22:07.668975 containerd[1654]: time="2025-12-13T00:22:07.668947458Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 00:22:07.668975 containerd[1654]: time="2025-12-13T00:22:07.668964017Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 00:22:07.669028 containerd[1654]: time="2025-12-13T00:22:07.668981859Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 00:22:07.669028 containerd[1654]: time="2025-12-13T00:22:07.668997270Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 00:22:07.669028 containerd[1654]: time="2025-12-13T00:22:07.669011145Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 00:22:07.669028 containerd[1654]: time="2025-12-13T00:22:07.669023950Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 00:22:07.669110 containerd[1654]: 
time="2025-12-13T00:22:07.669042508Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 13 00:22:07.669249 containerd[1654]: time="2025-12-13T00:22:07.669205310Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 00:22:07.669289 containerd[1654]: time="2025-12-13T00:22:07.669272328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 00:22:07.669315 containerd[1654]: time="2025-12-13T00:22:07.669292757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 00:22:07.669315 containerd[1654]: time="2025-12-13T00:22:07.669307953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 00:22:07.669364 containerd[1654]: time="2025-12-13T00:22:07.669322601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 00:22:07.669393 containerd[1654]: time="2025-12-13T00:22:07.669361038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 00:22:07.669393 containerd[1654]: time="2025-12-13T00:22:07.669379360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 00:22:07.669442 containerd[1654]: time="2025-12-13T00:22:07.669402013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 00:22:07.669442 containerd[1654]: time="2025-12-13T00:22:07.669417611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 00:22:07.669492 containerd[1654]: time="2025-12-13T00:22:07.669455039Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 00:22:07.669492 containerd[1654]: time="2025-12-13T00:22:07.669471647Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 00:22:07.669851 containerd[1654]: time="2025-12-13T00:22:07.669807432Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 00:22:07.670009 containerd[1654]: time="2025-12-13T00:22:07.669972850Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 00:22:07.670009 containerd[1654]: time="2025-12-13T00:22:07.670002136Z" level=info msg="Start snapshots syncer" Dec 13 00:22:07.670497 containerd[1654]: time="2025-12-13T00:22:07.670317070Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 00:22:07.670767 containerd[1654]: time="2025-12-13T00:22:07.670704343Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 00:22:07.671070 containerd[1654]: time="2025-12-13T00:22:07.670779747Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 00:22:07.671070 containerd[1654]: time="2025-12-13T00:22:07.670843552Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 00:22:07.671329 containerd[1654]: time="2025-12-13T00:22:07.671222290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 00:22:07.671329 containerd[1654]: time="2025-12-13T00:22:07.671290190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 00:22:07.671329 containerd[1654]: time="2025-12-13T00:22:07.671306288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 00:22:07.671396 containerd[1654]: time="2025-12-13T00:22:07.671332419Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 00:22:07.671396 containerd[1654]: time="2025-12-13T00:22:07.671348214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 00:22:07.671396 containerd[1654]: time="2025-12-13T00:22:07.671362813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 00:22:07.671396 containerd[1654]: time="2025-12-13T00:22:07.671376333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 00:22:07.671396 containerd[1654]: time="2025-12-13T00:22:07.671388806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 
00:22:07.671493 containerd[1654]: time="2025-12-13T00:22:07.671401456Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 00:22:07.671903 containerd[1654]: time="2025-12-13T00:22:07.671837346Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:22:07.671903 containerd[1654]: time="2025-12-13T00:22:07.671867670Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:22:07.671903 containerd[1654]: time="2025-12-13T00:22:07.671880015Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:22:07.671903 containerd[1654]: time="2025-12-13T00:22:07.671893644Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:22:07.671903 containerd[1654]: time="2025-12-13T00:22:07.671904648Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.671918991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.671932356Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.671955165Z" level=info msg="runtime interface created" Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.671962044Z" level=info msg="created NRI interface" Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.671972253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.671985421Z" level=info msg="Connect containerd service" Dec 13 00:22:07.672018 containerd[1654]: time="2025-12-13T00:22:07.672008143Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 00:22:07.674827 containerd[1654]: time="2025-12-13T00:22:07.674775458Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:22:07.726707 tar[1649]: linux-amd64/README.md Dec 13 00:22:07.749589 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 00:22:07.825525 containerd[1654]: time="2025-12-13T00:22:07.825453009Z" level=info msg="Start subscribing containerd event" Dec 13 00:22:07.826022 containerd[1654]: time="2025-12-13T00:22:07.825891279Z" level=info msg="Start recovering state" Dec 13 00:22:07.826129 containerd[1654]: time="2025-12-13T00:22:07.826070983Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 00:22:07.826291 containerd[1654]: time="2025-12-13T00:22:07.826185383Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 00:22:07.826797 containerd[1654]: time="2025-12-13T00:22:07.826769105Z" level=info msg="Start event monitor" Dec 13 00:22:07.826867 containerd[1654]: time="2025-12-13T00:22:07.826801232Z" level=info msg="Start cni network conf syncer for default" Dec 13 00:22:07.826867 containerd[1654]: time="2025-12-13T00:22:07.826814410Z" level=info msg="Start streaming server" Dec 13 00:22:07.826867 containerd[1654]: time="2025-12-13T00:22:07.826824090Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 13 00:22:07.826867 containerd[1654]: time="2025-12-13T00:22:07.826831704Z" level=info msg="runtime interface starting up..." Dec 13 00:22:07.826867 containerd[1654]: time="2025-12-13T00:22:07.826838062Z" level=info msg="starting plugins..." Dec 13 00:22:07.827319 containerd[1654]: time="2025-12-13T00:22:07.826853366Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 13 00:22:07.828776 containerd[1654]: time="2025-12-13T00:22:07.828739330Z" level=info msg="containerd successfully booted in 0.190223s" Dec 13 00:22:07.828993 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 00:22:08.207962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:08.210656 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 00:22:08.212771 systemd[1]: Startup finished in 3.630s (kernel) + 7.538s (initrd) + 6.731s (userspace) = 17.900s. Dec 13 00:22:08.213441 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:22:08.659750 kubelet[1745]: E1213 00:22:08.659634 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:22:08.664096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:22:08.664361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:22:08.664863 systemd[1]: kubelet.service: Consumed 1.057s CPU time, 256.1M memory peak. Dec 13 00:22:16.546069 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 00:22:16.547693 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:50218.service - OpenSSH per-connection server daemon (10.0.0.1:50218). Dec 13 00:22:16.636040 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 50218 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:16.637975 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:16.645053 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 00:22:16.646197 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 00:22:16.651272 systemd-logind[1637]: New session 1 of user core. Dec 13 00:22:16.679009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 00:22:16.682444 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 00:22:16.718624 (systemd)[1764]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:16.721903 systemd-logind[1637]: New session 2 of user core. 
Dec 13 00:22:16.912548 systemd[1764]: Queued start job for default target default.target. Dec 13 00:22:16.924881 systemd[1764]: Created slice app.slice - User Application Slice. Dec 13 00:22:16.924918 systemd[1764]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 13 00:22:16.924932 systemd[1764]: Reached target paths.target - Paths. Dec 13 00:22:16.925017 systemd[1764]: Reached target timers.target - Timers. Dec 13 00:22:16.926721 systemd[1764]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 00:22:16.927846 systemd[1764]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 13 00:22:16.939366 systemd[1764]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 00:22:16.939486 systemd[1764]: Reached target sockets.target - Sockets. Dec 13 00:22:16.943800 systemd[1764]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 13 00:22:16.943981 systemd[1764]: Reached target basic.target - Basic System. Dec 13 00:22:16.944072 systemd[1764]: Reached target default.target - Main User Target. Dec 13 00:22:16.944132 systemd[1764]: Startup finished in 216ms. Dec 13 00:22:16.944224 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 00:22:16.945967 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 00:22:16.975833 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:50232.service - OpenSSH per-connection server daemon (10.0.0.1:50232). Dec 13 00:22:17.051051 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 50232 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:17.053606 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:17.059559 systemd-logind[1637]: New session 3 of user core. Dec 13 00:22:17.073548 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 00:22:17.090690 sshd[1782]: Connection closed by 10.0.0.1 port 50232 Dec 13 00:22:17.091489 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:17.102146 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:50232.service: Deactivated successfully. Dec 13 00:22:17.104447 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 00:22:17.105330 systemd-logind[1637]: Session 3 logged out. Waiting for processes to exit. Dec 13 00:22:17.108981 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:50246.service - OpenSSH per-connection server daemon (10.0.0.1:50246). Dec 13 00:22:17.110212 systemd-logind[1637]: Removed session 3. Dec 13 00:22:17.184711 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 50246 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:17.186875 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:17.192460 systemd-logind[1637]: New session 4 of user core. Dec 13 00:22:17.203455 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 00:22:17.214602 sshd[1792]: Connection closed by 10.0.0.1 port 50246 Dec 13 00:22:17.215024 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:17.234638 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:50246.service: Deactivated successfully. Dec 13 00:22:17.237164 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 00:22:17.238173 systemd-logind[1637]: Session 4 logged out. Waiting for processes to exit. 
Dec 13 00:22:17.241797 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:50252.service - OpenSSH per-connection server daemon (10.0.0.1:50252). Dec 13 00:22:17.242778 systemd-logind[1637]: Removed session 4. Dec 13 00:22:17.309532 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 50252 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:17.311712 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:17.316834 systemd-logind[1637]: New session 5 of user core. Dec 13 00:22:17.332442 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 00:22:17.349203 sshd[1802]: Connection closed by 10.0.0.1 port 50252 Dec 13 00:22:17.349518 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:17.367139 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:50252.service: Deactivated successfully. Dec 13 00:22:17.369389 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 00:22:17.370227 systemd-logind[1637]: Session 5 logged out. Waiting for processes to exit. Dec 13 00:22:17.373534 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:50266.service - OpenSSH per-connection server daemon (10.0.0.1:50266). Dec 13 00:22:17.374616 systemd-logind[1637]: Removed session 5. Dec 13 00:22:17.425102 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 50266 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:17.427137 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:17.432799 systemd-logind[1637]: New session 6 of user core. Dec 13 00:22:17.446601 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 00:22:17.471821 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 00:22:17.472186 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:22:17.492168 sudo[1813]: pam_unix(sudo:session): session closed for user root Dec 13 00:22:17.494316 sshd[1812]: Connection closed by 10.0.0.1 port 50266 Dec 13 00:22:17.494732 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:17.508023 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:50266.service: Deactivated successfully. Dec 13 00:22:17.509889 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 00:22:17.510731 systemd-logind[1637]: Session 6 logged out. Waiting for processes to exit. Dec 13 00:22:17.513795 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:50274.service - OpenSSH per-connection server daemon (10.0.0.1:50274). Dec 13 00:22:17.514376 systemd-logind[1637]: Removed session 6. Dec 13 00:22:17.566876 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 50274 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:17.568604 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:17.573412 systemd-logind[1637]: New session 7 of user core. Dec 13 00:22:17.584380 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 00:22:17.599105 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 00:22:17.599579 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:22:17.612614 sudo[1827]: pam_unix(sudo:session): session closed for user root Dec 13 00:22:17.620627 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 00:22:17.621040 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:22:17.630062 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:22:17.678000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:22:17.680371 augenrules[1851]: No rules Dec 13 00:22:17.681356 kernel: kauditd_printk_skb: 149 callbacks suppressed Dec 13 00:22:17.681414 kernel: audit: type=1305 audit(1765585337.678:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:22:17.682122 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:22:17.682515 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:22:17.683662 sudo[1826]: pam_unix(sudo:session): session closed for user root Dec 13 00:22:17.678000 audit[1851]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffebbb0cd40 a2=420 a3=0 items=0 ppid=1832 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:17.690496 kernel: audit: type=1300 audit(1765585337.678:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffebbb0cd40 a2=420 a3=0 items=0 ppid=1832 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:17.690638 sshd[1825]: Connection closed by 10.0.0.1 port 50274 Dec 13 00:22:17.678000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:22:17.691136 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:17.693286 kernel: audit: type=1327 audit(1765585337.678:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:22:17.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.697661 kernel: audit: type=1130 audit(1765585337.680:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.697726 kernel: audit: type=1131 audit(1765585337.680:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:17.680000 audit[1826]: USER_END pid=1826 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.707336 kernel: audit: type=1106 audit(1765585337.680:235): pid=1826 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.707400 kernel: audit: type=1104 audit(1765585337.680:236): pid=1826 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.680000 audit[1826]: CRED_DISP pid=1826 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.689000 audit[1820]: USER_END pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.719079 kernel: audit: type=1106 audit(1765585337.689:237): pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.719137 kernel: audit: type=1104 audit(1765585337.689:238): pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.689000 audit[1820]: CRED_DISP pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.730221 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:50274.service: Deactivated successfully. Dec 13 00:22:17.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.91:22-10.0.0.1:50274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.732279 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 00:22:17.733102 systemd-logind[1637]: Session 7 logged out. Waiting for processes to exit. Dec 13 00:22:17.735972 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:50284.service - OpenSSH per-connection server daemon (10.0.0.1:50284). Dec 13 00:22:17.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.91:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:17.736272 kernel: audit: type=1131 audit(1765585337.729:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.91:22-10.0.0.1:50274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.736633 systemd-logind[1637]: Removed session 7. Dec 13 00:22:17.795000 audit[1860]: USER_ACCT pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.796381 sshd[1860]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:22:17.797000 audit[1860]: CRED_ACQ pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.797000 audit[1860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff59c459d0 a2=3 a3=0 items=0 ppid=1 pid=1860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:17.797000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:17.798372 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:17.803569 systemd-logind[1637]: New session 8 of user core. Dec 13 00:22:17.817396 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 00:22:17.818000 audit[1860]: USER_START pid=1860 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.821000 audit[1864]: CRED_ACQ pid=1864 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:17.832000 audit[1865]: USER_ACCT pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.833793 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 00:22:17.832000 audit[1865]: CRED_REFR pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:17.834181 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:22:17.832000 audit[1865]: USER_START pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:22:18.362545 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 00:22:18.383630 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 00:22:18.723005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 00:22:18.725473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:18.995020 dockerd[1888]: time="2025-12-13T00:22:18.994971998Z" level=info msg="Starting up" Dec 13 00:22:19.002068 dockerd[1888]: time="2025-12-13T00:22:19.002042806Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 13 00:22:19.010338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:19.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:19.023634 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:22:19.026369 dockerd[1888]: time="2025-12-13T00:22:19.026299681Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 13 00:22:19.482906 kubelet[1909]: E1213 00:22:19.482833 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:22:19.489815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:22:19.490056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:22:19.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:22:19.490576 systemd[1]: kubelet.service: Consumed 464ms CPU time, 110.9M memory peak. Dec 13 00:22:19.538553 dockerd[1888]: time="2025-12-13T00:22:19.538479965Z" level=info msg="Loading containers: start." 
Dec 13 00:22:19.550272 kernel: Initializing XFRM netlink socket Dec 13 00:22:19.621000 audit[1958]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.621000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe1a73ea10 a2=0 a3=0 items=0 ppid=1888 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:22:19.623000 audit[1960]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.623000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe1affd4f0 a2=0 a3=0 items=0 ppid=1888 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:22:19.626000 audit[1962]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.626000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc91a0bc40 a2=0 a3=0 items=0 ppid=1888 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:22:19.628000 audit[1964]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.628000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee4f64b70 a2=0 a3=0 items=0 ppid=1888 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:22:19.630000 audit[1966]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.630000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff33373640 a2=0 a3=0 items=0 ppid=1888 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.630000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:22:19.632000 audit[1968]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.632000 audit[1968]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7fff6027a6c0 a2=0 a3=0 items=0 ppid=1888 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:22:19.634000 audit[1970]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.634000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffed5ce4a40 a2=0 a3=0 items=0 ppid=1888 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:22:19.637000 audit[1972]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.637000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdd15b52f0 a2=0 a3=0 items=0 ppid=1888 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:22:19.671000 audit[1975]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.671000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc2391d7c0 a2=0 a3=0 items=0 ppid=1888 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 00:22:19.673000 audit[1977]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.673000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd8801240 a2=0 a3=0 items=0 ppid=1888 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:22:19.676000 audit[1979]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.676000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc2b254340 a2=0 
a3=0 items=0 ppid=1888 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:22:19.678000 audit[1981]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.678000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd957d9070 a2=0 a3=0 items=0 ppid=1888 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:22:19.681000 audit[1983]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.681000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd8ee192c0 a2=0 a3=0 items=0 ppid=1888 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:22:19.729000 audit[2013]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.729000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe4d025d40 a2=0 a3=0 items=0 ppid=1888 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:22:19.731000 audit[2015]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.731000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff229c42f0 a2=0 a3=0 items=0 ppid=1888 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:22:19.734000 audit[2017]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.734000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffced89300 a2=0 a3=0 items=0 ppid=1888 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 00:22:19.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:22:19.737000 audit[2019]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.737000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1964acd0 a2=0 a3=0 items=0 ppid=1888 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:22:19.739000 audit[2021]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.739000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff483bf200 a2=0 a3=0 items=0 ppid=1888 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:22:19.741000 audit[2023]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.741000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe8a7d800 a2=0 a3=0 items=0 ppid=1888 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:22:19.744000 audit[2025]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.744000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe83dfaec0 a2=0 a3=0 items=0 ppid=1888 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:22:19.746000 audit[2027]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.746000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcbcb0be50 a2=0 a3=0 items=0 ppid=1888 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.746000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:22:19.749000 audit[2029]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.749000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc3dfc2290 a2=0 a3=0 items=0 ppid=1888 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.749000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 13 00:22:19.752000 audit[2031]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.752000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffc675c2b0 a2=0 a3=0 items=0 ppid=1888 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:22:19.754000 audit[2033]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.754000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdc8ac8740 a2=0 a3=0 items=0 ppid=1888 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:22:19.757000 audit[2035]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.757000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc041693b0 a2=0 a3=0 items=0 ppid=1888 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:22:19.760000 audit[2037]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.760000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffde4829670 a2=0 a3=0 items=0 ppid=1888 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.760000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:22:19.768000 audit[2042]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.768000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffed8411e0 a2=0 a3=0 items=0 ppid=1888 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:22:19.770000 audit[2044]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.770000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcf8e50a30 a2=0 a3=0 items=0 ppid=1888 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:22:19.773000 audit[2046]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.773000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdf1fe8d80 a2=0 a3=0 items=0 ppid=1888 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:22:19.775000 audit[2048]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.775000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff18e4dd30 a2=0 a3=0 items=0 ppid=1888 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:22:19.778000 audit[2050]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.778000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffb70a7ff0 a2=0 a3=0 items=0 ppid=1888 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.778000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:22:19.780000 audit[2052]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2052 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:19.780000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd489fd1b0 a2=0 a3=0 items=0 ppid=1888 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.780000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:22:19.802000 audit[2056]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.802000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc02d9fdf0 a2=0 a3=0 items=0 ppid=1888 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 00:22:19.805000 audit[2058]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.805000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd0e5d5a30 a2=0 a3=0 items=0 ppid=1888 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 00:22:19.817000 audit[2066]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.817000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff9201a9b0 a2=0 a3=0 items=0 ppid=1888 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 13 00:22:19.827000 audit[2072]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.827000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff75bfa0b0 a2=0 a3=0 items=0 ppid=1888 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 13 00:22:19.830000 audit[2074]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 
00:22:19.830000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd779a4db0 a2=0 a3=0 items=0 ppid=1888 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 00:22:19.833000 audit[2076]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.833000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc4a195a40 a2=0 a3=0 items=0 ppid=1888 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.833000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 13 00:22:19.835000 audit[2078]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.835000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffe42d7030 a2=0 a3=0 items=0 ppid=1888 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:22:19.838000 audit[2080]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:19.838000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffebbfcc9c0 a2=0 a3=0 items=0 ppid=1888 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:19.838000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 00:22:19.839125 systemd-networkd[1316]: docker0: Link UP Dec 13 00:22:19.845817 dockerd[1888]: time="2025-12-13T00:22:19.845707139Z" level=info msg="Loading containers: done." Dec 13 00:22:19.888994 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck515377963-merged.mount: Deactivated successfully. 
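Each PROCTITLE value in the audit records above is the hex-encoded, NUL-separated argv of the iptables/ip6tables command dockerd ran while wiring up the docker0 chains. A minimal Python sketch that decodes one of them (the payload below is copied from the audit[2042] PROCTITLE record above):

# Audit PROCTITLE records carry the command line as NUL-separated bytes,
# hex-encoded into one string; splitting the decoded bytes on b"\x00"
# recovers the original argv.
def decode_proctitle(hex_payload):
    raw = bytes.fromhex(hex_payload)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

payload = ("2F7573722F62696E2F69707461626C6573002D2D77616974002D74"
           "0066696C746572002D4E00444F434B45522D55534552")
print(decode_proctitle(payload))
# -> ['/usr/bin/iptables', '--wait', '-t', 'filter', '-N', 'DOCKER-USER']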
Dec 13 00:22:19.893632 dockerd[1888]: time="2025-12-13T00:22:19.893587570Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 00:22:19.893816 dockerd[1888]: time="2025-12-13T00:22:19.893691393Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 00:22:19.893816 dockerd[1888]: time="2025-12-13T00:22:19.893772413Z" level=info msg="Initializing buildkit" Dec 13 00:22:19.932953 dockerd[1888]: time="2025-12-13T00:22:19.932909192Z" level=info msg="Completed buildkit initialization" Dec 13 00:22:19.940681 dockerd[1888]: time="2025-12-13T00:22:19.940618680Z" level=info msg="Daemon has completed initialization" Dec 13 00:22:19.940818 dockerd[1888]: time="2025-12-13T00:22:19.940708409Z" level=info msg="API listen on /run/docker.sock" Dec 13 00:22:19.941049 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 00:22:19.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:20.821137 containerd[1654]: time="2025-12-13T00:22:20.821062210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 13 00:22:22.081822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885029685.mount: Deactivated successfully. Dec 13 00:22:23.213908 containerd[1654]: time="2025-12-13T00:22:23.213832618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:23.214831 containerd[1654]: time="2025-12-13T00:22:23.214737818Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25991649" Dec 13 00:22:23.215911 containerd[1654]: time="2025-12-13T00:22:23.215853485Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:23.218575 containerd[1654]: time="2025-12-13T00:22:23.218530073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:23.219777 containerd[1654]: time="2025-12-13T00:22:23.219721174Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.398596046s" Dec 13 00:22:23.219777 containerd[1654]: time="2025-12-13T00:22:23.219774305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 13 00:22:23.221012 containerd[1654]: time="2025-12-13T00:22:23.220965736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 13 00:22:24.735522 containerd[1654]: time="2025-12-13T00:22:24.735442981Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:24.736274 containerd[1654]: time="2025-12-13T00:22:24.736180234Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 13 00:22:24.737575 containerd[1654]: time="2025-12-13T00:22:24.737502391Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:24.740128 containerd[1654]: time="2025-12-13T00:22:24.740072396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:24.741164 containerd[1654]: time="2025-12-13T00:22:24.741114655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.520099332s" Dec 13 00:22:24.741164 containerd[1654]: time="2025-12-13T00:22:24.741159157Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 13 00:22:24.741956 containerd[1654]: time="2025-12-13T00:22:24.741718692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 13 00:22:27.177130 containerd[1654]: time="2025-12-13T00:22:27.177027925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:27.178970 containerd[1654]: time="2025-12-13T00:22:27.178918737Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Dec 13 00:22:27.180781 containerd[1654]: time="2025-12-13T00:22:27.180666359Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:27.183708 containerd[1654]: time="2025-12-13T00:22:27.183639600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:27.185278 containerd[1654]: time="2025-12-13T00:22:27.185199046Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.443429964s" Dec 13 00:22:27.185278 containerd[1654]: time="2025-12-13T00:22:27.185274353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 13 00:22:27.185967 containerd[1654]: time="2025-12-13T00:22:27.185884956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 
13 00:22:29.565352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 00:22:29.567766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:29.776121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:29.779298 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 13 00:22:29.779423 kernel: audit: type=1130 audit(1765585349.775:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:29.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:29.783029 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:22:29.836432 kubelet[2204]: E1213 00:22:29.836298 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:22:29.840849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:22:29.841095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:22:29.841555 systemd[1]: kubelet.service: Consumed 236ms CPU time, 109.4M memory peak. Dec 13 00:22:29.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:22:29.846270 kernel: audit: type=1131 audit(1765585349.840:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:22:29.909070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510615724.mount: Deactivated successfully. 
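At this point kubelet is crash-looping: systemd keeps scheduling restart jobs and each attempt exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (that file is normally only written later, e.g. by kubeadm init/join). A small Python sketch that pulls the restart counter and the failure reason out of journal lines like the ones above (the sample lines are abridged copies of the entries above):

import re

journal = [
    "kubelet.service: Scheduled restart job, restart counter is at 2.",
    'run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml"',
    "kubelet.service: Main process exited, code=exited, status=1/FAILURE",
]

for line in journal:
    # systemd notes how many times the unit has been rescheduled ...
    if m := re.search(r"restart counter is at (\d+)", line):
        print("restart attempt :", m.group(1))
    # ... and kubelet's run.go logs why the process gave up.
    if m := re.search(r'err="([^"]+)"', line):
        print("failure reason  :", m.group(1))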
Dec 13 00:22:30.664745 containerd[1654]: time="2025-12-13T00:22:30.664655268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:30.688734 containerd[1654]: time="2025-12-13T00:22:30.688664024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25962930" Dec 13 00:22:30.700643 containerd[1654]: time="2025-12-13T00:22:30.700595826Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:30.713939 containerd[1654]: time="2025-12-13T00:22:30.713875498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:30.714558 containerd[1654]: time="2025-12-13T00:22:30.714510410Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.528587895s" Dec 13 00:22:30.714558 containerd[1654]: time="2025-12-13T00:22:30.714540666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 13 00:22:30.715146 containerd[1654]: time="2025-12-13T00:22:30.715072080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 13 00:22:32.501226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788250464.mount: Deactivated successfully. 
Dec 13 00:22:33.945982 containerd[1654]: time="2025-12-13T00:22:33.945881962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:33.947476 containerd[1654]: time="2025-12-13T00:22:33.947363798Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22127880" Dec 13 00:22:33.949489 containerd[1654]: time="2025-12-13T00:22:33.949401596Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:33.953584 containerd[1654]: time="2025-12-13T00:22:33.953512314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:33.956128 containerd[1654]: time="2025-12-13T00:22:33.956033378Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.24088966s" Dec 13 00:22:33.956128 containerd[1654]: time="2025-12-13T00:22:33.956107155Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 13 00:22:33.958062 containerd[1654]: time="2025-12-13T00:22:33.957966240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 13 00:22:34.523070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792299100.mount: Deactivated successfully. 
Dec 13 00:22:34.531090 containerd[1654]: time="2025-12-13T00:22:34.530989363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:34.532036 containerd[1654]: time="2025-12-13T00:22:34.532002157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 13 00:22:34.533636 containerd[1654]: time="2025-12-13T00:22:34.533560397Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:34.535554 containerd[1654]: time="2025-12-13T00:22:34.535504360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:34.536398 containerd[1654]: time="2025-12-13T00:22:34.536337313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 578.280624ms" Dec 13 00:22:34.536398 containerd[1654]: time="2025-12-13T00:22:34.536387136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 13 00:22:34.537124 containerd[1654]: time="2025-12-13T00:22:34.537077468Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 13 00:22:38.535137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208429874.mount: Deactivated successfully. Dec 13 00:22:39.972908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 00:22:39.975790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:40.260437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:40.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:40.265279 kernel: audit: type=1130 audit(1765585360.259:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:40.280920 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:22:40.463498 kubelet[2332]: E1213 00:22:40.463437 2332 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:22:40.467847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:22:40.468038 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 00:22:40.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:22:40.468731 systemd[1]: kubelet.service: Consumed 413ms CPU time, 110.8M memory peak. Dec 13 00:22:40.474269 kernel: audit: type=1131 audit(1765585360.467:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:22:41.971415 containerd[1654]: time="2025-12-13T00:22:41.971342797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:41.972283 containerd[1654]: time="2025-12-13T00:22:41.972257695Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74159920" Dec 13 00:22:41.973621 containerd[1654]: time="2025-12-13T00:22:41.973553036Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:41.976950 containerd[1654]: time="2025-12-13T00:22:41.976893581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:41.977930 containerd[1654]: time="2025-12-13T00:22:41.977873573Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.440761716s" Dec 13 00:22:41.977930 containerd[1654]: time="2025-12-13T00:22:41.977907156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 13 00:22:45.649526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:45.649727 systemd[1]: kubelet.service: Consumed 413ms CPU time, 110.8M memory peak. Dec 13 00:22:45.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:45.652348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:45.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:45.659329 kernel: audit: type=1130 audit(1765585365.648:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:45.659425 kernel: audit: type=1131 audit(1765585365.648:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:45.785418 systemd[1]: Reload requested from client PID 2372 ('systemctl') (unit session-8.scope)... 
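Each "Pulled image ... in Ns" line above reports the image size together with the wall-clock pull time, so a rough effective pull rate for this boot can be read straight off the log. A short Python sketch doing that arithmetic, with the sizes and durations copied from the containerd entries above:

# (reported size in bytes, pull duration in seconds) per the log lines above.
pulls = {
    "kube-apiserver:v1.34.3":          (27064672, 2.398596046),
    "kube-controller-manager:v1.34.3": (22819474, 1.520099332),
    "kube-scheduler:v1.34.3":          (17382979, 2.443429964),
    "kube-proxy:v1.34.3":              (25964312, 3.528587895),
    "coredns:v1.12.1":                 (22384805, 3.24088966),
    "etcd:3.6.4-0":                    (74311308, 7.440761716),
}

for image, (size, secs) in pulls.items():
    rate = size / secs / (1024 * 1024)  # MiB/s
    print(f"{image:33} {size / 1e6:6.1f} MB in {secs:5.2f}s  ~{rate:5.1f} MiB/s")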
Dec 13 00:22:45.785441 systemd[1]: Reloading... Dec 13 00:22:45.902276 zram_generator::config[2427]: No configuration found. Dec 13 00:22:46.885431 systemd[1]: Reloading finished in 1099 ms. Dec 13 00:22:46.918000 audit: BPF prog-id=67 op=LOAD Dec 13 00:22:46.921280 kernel: audit: type=1334 audit(1765585366.918:298): prog-id=67 op=LOAD Dec 13 00:22:46.921355 kernel: audit: type=1334 audit(1765585366.918:299): prog-id=57 op=UNLOAD Dec 13 00:22:46.918000 audit: BPF prog-id=57 op=UNLOAD Dec 13 00:22:46.918000 audit: BPF prog-id=68 op=LOAD Dec 13 00:22:46.924248 kernel: audit: type=1334 audit(1765585366.918:300): prog-id=68 op=LOAD Dec 13 00:22:46.924299 kernel: audit: type=1334 audit(1765585366.918:301): prog-id=69 op=LOAD Dec 13 00:22:46.918000 audit: BPF prog-id=69 op=LOAD Dec 13 00:22:46.918000 audit: BPF prog-id=58 op=UNLOAD Dec 13 00:22:46.918000 audit: BPF prog-id=59 op=UNLOAD Dec 13 00:22:46.919000 audit: BPF prog-id=70 op=LOAD Dec 13 00:22:46.919000 audit: BPF prog-id=56 op=UNLOAD Dec 13 00:22:46.922000 audit: BPF prog-id=71 op=LOAD Dec 13 00:22:46.922000 audit: BPF prog-id=53 op=UNLOAD Dec 13 00:22:46.923000 audit: BPF prog-id=72 op=LOAD Dec 13 00:22:46.923000 audit: BPF prog-id=60 op=UNLOAD Dec 13 00:22:46.923000 audit: BPF prog-id=73 op=LOAD Dec 13 00:22:46.923000 audit: BPF prog-id=74 op=LOAD Dec 13 00:22:46.923000 audit: BPF prog-id=61 op=UNLOAD Dec 13 00:22:46.923000 audit: BPF prog-id=62 op=UNLOAD Dec 13 00:22:46.926296 kernel: audit: type=1334 audit(1765585366.918:302): prog-id=58 op=UNLOAD Dec 13 00:22:46.926361 kernel: audit: type=1334 audit(1765585366.918:303): prog-id=59 op=UNLOAD Dec 13 00:22:46.926390 kernel: audit: type=1334 audit(1765585366.919:304): prog-id=70 op=LOAD Dec 13 00:22:46.926416 kernel: audit: type=1334 audit(1765585366.919:305): prog-id=56 op=UNLOAD Dec 13 00:22:46.925000 audit: BPF prog-id=75 op=LOAD Dec 13 00:22:46.925000 audit: BPF prog-id=47 op=UNLOAD Dec 13 00:22:46.925000 audit: BPF prog-id=76 op=LOAD Dec 13 00:22:46.925000 audit: BPF prog-id=77 op=LOAD Dec 13 00:22:46.925000 audit: BPF prog-id=48 op=UNLOAD Dec 13 00:22:46.925000 audit: BPF prog-id=49 op=UNLOAD Dec 13 00:22:46.928000 audit: BPF prog-id=78 op=LOAD Dec 13 00:22:46.928000 audit: BPF prog-id=64 op=UNLOAD Dec 13 00:22:46.928000 audit: BPF prog-id=79 op=LOAD Dec 13 00:22:46.928000 audit: BPF prog-id=80 op=LOAD Dec 13 00:22:46.929000 audit: BPF prog-id=65 op=UNLOAD Dec 13 00:22:46.929000 audit: BPF prog-id=66 op=UNLOAD Dec 13 00:22:46.930000 audit: BPF prog-id=81 op=LOAD Dec 13 00:22:46.930000 audit: BPF prog-id=63 op=UNLOAD Dec 13 00:22:46.955000 audit: BPF prog-id=82 op=LOAD Dec 13 00:22:46.955000 audit: BPF prog-id=83 op=LOAD Dec 13 00:22:46.955000 audit: BPF prog-id=54 op=UNLOAD Dec 13 00:22:46.955000 audit: BPF prog-id=55 op=UNLOAD Dec 13 00:22:46.955000 audit: BPF prog-id=84 op=LOAD Dec 13 00:22:46.955000 audit: BPF prog-id=50 op=UNLOAD Dec 13 00:22:46.956000 audit: BPF prog-id=85 op=LOAD Dec 13 00:22:46.956000 audit: BPF prog-id=86 op=LOAD Dec 13 00:22:46.956000 audit: BPF prog-id=51 op=UNLOAD Dec 13 00:22:46.956000 audit: BPF prog-id=52 op=UNLOAD Dec 13 00:22:46.985672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 00:22:46.985857 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 00:22:46.986411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:46.986514 systemd[1]: kubelet.service: Consumed 201ms CPU time, 98.4M memory peak. 
Dec 13 00:22:46.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:22:46.989058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:47.337845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:47.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:47.362824 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:22:47.438979 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:22:47.439817 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:22:47.439817 kubelet[2466]: I1213 00:22:47.439633 2466 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:22:48.254266 kubelet[2466]: I1213 00:22:48.254174 2466 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 13 00:22:48.254266 kubelet[2466]: I1213 00:22:48.254214 2466 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:22:48.256747 kubelet[2466]: I1213 00:22:48.256685 2466 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 13 00:22:48.256807 kubelet[2466]: I1213 00:22:48.256787 2466 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 00:22:48.257532 kubelet[2466]: I1213 00:22:48.257492 2466 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:22:48.500215 kubelet[2466]: E1213 00:22:48.500109 2466 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 00:22:48.500881 kubelet[2466]: I1213 00:22:48.500432 2466 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:22:48.506166 kubelet[2466]: I1213 00:22:48.506045 2466 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:22:48.513364 kubelet[2466]: I1213 00:22:48.513297 2466 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 13 00:22:48.514822 kubelet[2466]: I1213 00:22:48.514766 2466 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:22:48.515014 kubelet[2466]: I1213 00:22:48.514800 2466 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:22:48.515014 kubelet[2466]: I1213 00:22:48.514989 2466 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:22:48.515014 kubelet[2466]: I1213 00:22:48.515001 2466 container_manager_linux.go:306] "Creating device plugin manager" Dec 13 00:22:48.515447 kubelet[2466]: I1213 00:22:48.515129 2466 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 13 00:22:48.522000 kubelet[2466]: I1213 00:22:48.521923 2466 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:22:48.522298 kubelet[2466]: I1213 00:22:48.522264 2466 kubelet.go:475] "Attempting to sync node with API server" Dec 13 00:22:48.522298 kubelet[2466]: I1213 00:22:48.522292 2466 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:22:48.522433 kubelet[2466]: I1213 00:22:48.522324 2466 kubelet.go:387] "Adding apiserver pod source" Dec 13 00:22:48.522433 kubelet[2466]: I1213 00:22:48.522345 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:22:48.523294 kubelet[2466]: E1213 00:22:48.523178 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:22:48.523294 kubelet[2466]: E1213 00:22:48.523258 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:22:48.528953 kubelet[2466]: I1213 00:22:48.528891 2466 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:22:48.529921 kubelet[2466]: I1213 00:22:48.529874 2466 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:22:48.529921 kubelet[2466]: I1213 00:22:48.529922 2466 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 13 00:22:48.530009 kubelet[2466]: W1213 00:22:48.529984 2466 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 00:22:48.535655 kubelet[2466]: I1213 00:22:48.535612 2466 server.go:1262] "Started kubelet" Dec 13 00:22:48.536161 kubelet[2466]: I1213 00:22:48.535919 2466 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:22:48.536161 kubelet[2466]: I1213 00:22:48.535962 2466 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 13 00:22:48.536161 kubelet[2466]: I1213 00:22:48.535976 2466 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:22:48.536844 kubelet[2466]: I1213 00:22:48.536802 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:22:48.537271 kubelet[2466]: I1213 00:22:48.537162 2466 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:22:48.540635 kubelet[2466]: E1213 00:22:48.539388 2466 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809e89c830b49a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:22:48.535577754 +0000 UTC m=+1.167214694,LastTimestamp:2025-12-13 00:22:48.535577754 +0000 UTC m=+1.167214694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:22:48.540834 kubelet[2466]: I1213 00:22:48.540732 2466 server.go:310] "Adding debug handlers to kubelet server" Dec 13 00:22:48.541275 kubelet[2466]: E1213 00:22:48.541148 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:48.542335 kubelet[2466]: I1213 00:22:48.541335 2466 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 13 00:22:48.542335 kubelet[2466]: I1213 00:22:48.541588 2466 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 00:22:48.542335 kubelet[2466]: I1213 00:22:48.541646 2466 reconciler.go:29] "Reconciler: start to sync state" Dec 13 00:22:48.542335 kubelet[2466]: E1213 00:22:48.541828 2466 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:22:48.542335 kubelet[2466]: I1213 00:22:48.541927 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:22:48.542335 kubelet[2466]: E1213 00:22:48.542174 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:22:48.543000 kubelet[2466]: E1213 00:22:48.542941 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Dec 13 00:22:48.544090 kubelet[2466]: I1213 00:22:48.544059 2466 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:22:48.544209 kubelet[2466]: I1213 00:22:48.544171 2466 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:22:48.546070 kubelet[2466]: I1213 00:22:48.546047 2466 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:22:48.544000 audit[2483]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.544000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffed607fef0 a2=0 a3=0 items=0 ppid=2466 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:22:48.546000 audit[2484]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.546000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff88d83790 a2=0 a3=0 items=0 ppid=2466 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:22:48.549000 audit[2486]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.549000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd13081710 a2=0 a3=0 items=0 ppid=2466 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:22:48.552000 audit[2488]: NETFILTER_CFG 
table=filter:45 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.552000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdc8a7c3a0 a2=0 a3=0 items=0 ppid=2466 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:22:48.567000 audit[2493]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.567000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffeec1d310 a2=0 a3=0 items=0 ppid=2466 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 13 00:22:48.569349 kubelet[2466]: I1213 00:22:48.568912 2466 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 13 00:22:48.569000 audit[2494]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:48.569000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeba70eb20 a2=0 a3=0 items=0 ppid=2466 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:22:48.570919 kubelet[2466]: I1213 00:22:48.570900 2466 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 13 00:22:48.570987 kubelet[2466]: I1213 00:22:48.570977 2466 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 13 00:22:48.571058 kubelet[2466]: I1213 00:22:48.571049 2466 kubelet.go:2427] "Starting kubelet main sync loop" Dec 13 00:22:48.571154 kubelet[2466]: E1213 00:22:48.571137 2466 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:22:48.571000 audit[2495]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.571000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff97a16bc0 a2=0 a3=0 items=0 ppid=2466 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:22:48.572000 audit[2496]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.572000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeebe6e320 a2=0 a3=0 items=0 ppid=2466 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:22:48.574000 audit[2497]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:22:48.574000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9971a2f0 a2=0 a3=0 items=0 ppid=2466 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:22:48.575000 audit[2498]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:48.575000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcef3be340 a2=0 a3=0 items=0 ppid=2466 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:22:48.577000 audit[2499]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:48.577000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9545e660 a2=0 a3=0 items=0 ppid=2466 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:22:48.579000 audit[2500]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:22:48.579000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd5690720 a2=0 a3=0 items=0 ppid=2466 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:48.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:22:48.581626 kubelet[2466]: E1213 00:22:48.581594 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:22:48.619709 kubelet[2466]: I1213 00:22:48.619643 2466 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:22:48.619709 kubelet[2466]: I1213 00:22:48.619665 2466 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:22:48.619709 kubelet[2466]: I1213 00:22:48.619697 2466 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:22:48.623128 kubelet[2466]: I1213 00:22:48.623080 2466 policy_none.go:49] "None policy: Start" Dec 13 00:22:48.623128 kubelet[2466]: I1213 00:22:48.623103 2466 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 13 00:22:48.623128 kubelet[2466]: I1213 00:22:48.623117 2466 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 13 00:22:48.625595 kubelet[2466]: I1213 00:22:48.625560 2466 policy_none.go:47] "Start" Dec 13 00:22:48.634119 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 00:22:48.642154 kubelet[2466]: E1213 00:22:48.642105 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:48.650049 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 00:22:48.654443 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 00:22:48.666108 kubelet[2466]: E1213 00:22:48.666066 2466 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:22:48.666612 kubelet[2466]: I1213 00:22:48.666376 2466 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:22:48.666612 kubelet[2466]: I1213 00:22:48.666395 2466 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:22:48.666759 kubelet[2466]: I1213 00:22:48.666744 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:22:48.667731 kubelet[2466]: E1213 00:22:48.667696 2466 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 00:22:48.667731 kubelet[2466]: E1213 00:22:48.667749 2466 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 00:22:48.687273 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 13 00:22:48.706014 kubelet[2466]: E1213 00:22:48.705943 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:48.711121 systemd[1]: Created slice kubepods-burstable-pod6ba1daa93dcb2c782ae7b6adc97d57eb.slice - libcontainer container kubepods-burstable-pod6ba1daa93dcb2c782ae7b6adc97d57eb.slice. Dec 13 00:22:48.715871 kubelet[2466]: E1213 00:22:48.715542 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:48.718176 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 13 00:22:48.720575 kubelet[2466]: E1213 00:22:48.720535 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:48.744191 kubelet[2466]: E1213 00:22:48.744130 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Dec 13 00:22:48.768914 kubelet[2466]: I1213 00:22:48.768770 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:22:48.769342 kubelet[2466]: E1213 00:22:48.769292 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Dec 13 00:22:48.843711 kubelet[2466]: I1213 00:22:48.843640 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:48.843711 kubelet[2466]: I1213 00:22:48.843679 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba1daa93dcb2c782ae7b6adc97d57eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba1daa93dcb2c782ae7b6adc97d57eb\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:48.843711 kubelet[2466]: I1213 00:22:48.843713 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba1daa93dcb2c782ae7b6adc97d57eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ba1daa93dcb2c782ae7b6adc97d57eb\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:48.843711 kubelet[2466]: I1213 00:22:48.843728 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:48.843711 kubelet[2466]: I1213 00:22:48.843743 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:48.844051 kubelet[2466]: I1213 00:22:48.843761 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba1daa93dcb2c782ae7b6adc97d57eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba1daa93dcb2c782ae7b6adc97d57eb\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:48.844051 kubelet[2466]: I1213 00:22:48.843777 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:48.844051 kubelet[2466]: I1213 00:22:48.843794 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:48.844051 kubelet[2466]: I1213 00:22:48.843811 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:48.971698 kubelet[2466]: I1213 00:22:48.971647 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:22:48.972010 kubelet[2466]: E1213 00:22:48.971980 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Dec 13 00:22:49.112476 kubelet[2466]: E1213 00:22:49.112335 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:49.113705 containerd[1654]: time="2025-12-13T00:22:49.113633163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 13 00:22:49.145658 kubelet[2466]: E1213 00:22:49.145595 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Dec 13 00:22:49.201630 kubelet[2466]: E1213 00:22:49.201565 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:49.202165 containerd[1654]: time="2025-12-13T00:22:49.202111201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ba1daa93dcb2c782ae7b6adc97d57eb,Namespace:kube-system,Attempt:0,}" Dec 13 00:22:49.253782 kubelet[2466]: E1213 00:22:49.253721 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:49.254339 containerd[1654]: time="2025-12-13T00:22:49.254283968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 13 00:22:49.374729 kubelet[2466]: I1213 00:22:49.374567 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:22:49.375050 kubelet[2466]: E1213 00:22:49.375010 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Dec 13 00:22:49.403021 kubelet[2466]: E1213 00:22:49.402936 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:22:49.458724 kubelet[2466]: E1213 00:22:49.457658 2466 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809e89c830b49a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:22:48.535577754 +0000 UTC m=+1.167214694,LastTimestamp:2025-12-13 00:22:48.535577754 +0000 UTC m=+1.167214694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:22:49.615503 kubelet[2466]: E1213 00:22:49.615434 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:22:49.817586 kubelet[2466]: E1213 00:22:49.817524 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:22:49.947098 kubelet[2466]: E1213 00:22:49.947022 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.91:6443: connect: connection refused" interval="1.6s" Dec 13 00:22:49.968981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231846168.mount: Deactivated successfully. Dec 13 00:22:49.973630 containerd[1654]: time="2025-12-13T00:22:49.973573111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:22:49.975702 containerd[1654]: time="2025-12-13T00:22:49.975629740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:22:49.981707 containerd[1654]: time="2025-12-13T00:22:49.979835162Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:22:49.981707 containerd[1654]: time="2025-12-13T00:22:49.981048406Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:22:49.982404 containerd[1654]: time="2025-12-13T00:22:49.982378090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:22:49.982480 containerd[1654]: time="2025-12-13T00:22:49.982449555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:22:49.983510 containerd[1654]: time="2025-12-13T00:22:49.983480054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:22:49.984773 containerd[1654]: time="2025-12-13T00:22:49.984717594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:22:49.985325 containerd[1654]: time="2025-12-13T00:22:49.985294675Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 729.639855ms" Dec 13 00:22:49.988038 containerd[1654]: time="2025-12-13T00:22:49.988010951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 723.446052ms" Dec 13 00:22:49.994638 containerd[1654]: time="2025-12-13T00:22:49.994560297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 736.757354ms" Dec 13 00:22:50.067956 kubelet[2466]: E1213 00:22:50.067814 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:22:50.072225 containerd[1654]: time="2025-12-13T00:22:50.072171291Z" level=info msg="connecting to shim 4dbcc416840c76b83ed3a6d3f3dd9628e482a08c3530c0b8521dda4f1c784383" address="unix:///run/containerd/s/265edfdfe8a14e9ccf717c5fdbab0905276abdc9efe8bbe9885876fce3cdfa57" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:50.072779 containerd[1654]: time="2025-12-13T00:22:50.072729526Z" level=info msg="connecting to shim 83bfd0b83fa5cba5b1c7a3e7b3d3d98dae4d56114c594f3c4e4239d3b6b00b1a" address="unix:///run/containerd/s/2c10c1d0a6354f23f68d5ef7ccac2c3fdb9292a948402dc50f7d162c15bf2796" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:50.073091 containerd[1654]: time="2025-12-13T00:22:50.072989577Z" level=info msg="connecting to shim 2f0f350f0d20f44ccd6882f3f6e04741d29b8c9d3b8e4f321d6a5bfa54d7cb4b" address="unix:///run/containerd/s/c1c28f5fd3a371a327dd4c0dfbfbdbe7aceb946cbdd04f05a10aeb0e1d204463" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:50.165418 systemd[1]: Started cri-containerd-4dbcc416840c76b83ed3a6d3f3dd9628e482a08c3530c0b8521dda4f1c784383.scope - libcontainer container 4dbcc416840c76b83ed3a6d3f3dd9628e482a08c3530c0b8521dda4f1c784383. Dec 13 00:22:50.171030 systemd[1]: Started cri-containerd-2f0f350f0d20f44ccd6882f3f6e04741d29b8c9d3b8e4f321d6a5bfa54d7cb4b.scope - libcontainer container 2f0f350f0d20f44ccd6882f3f6e04741d29b8c9d3b8e4f321d6a5bfa54d7cb4b. Dec 13 00:22:50.173359 systemd[1]: Started cri-containerd-83bfd0b83fa5cba5b1c7a3e7b3d3d98dae4d56114c594f3c4e4239d3b6b00b1a.scope - libcontainer container 83bfd0b83fa5cba5b1c7a3e7b3d3d98dae4d56114c594f3c4e4239d3b6b00b1a. 
Dec 13 00:22:50.177329 kubelet[2466]: I1213 00:22:50.177294 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:22:50.177626 kubelet[2466]: E1213 00:22:50.177593 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Dec 13 00:22:50.181000 audit: BPF prog-id=87 op=LOAD Dec 13 00:22:50.182000 audit: BPF prog-id=88 op=LOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.182000 audit: BPF prog-id=88 op=UNLOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.182000 audit: BPF prog-id=89 op=LOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.182000 audit: BPF prog-id=90 op=LOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.182000 audit: BPF prog-id=90 op=UNLOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.182000 audit: BPF prog-id=89 op=UNLOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.182000 audit: BPF prog-id=91 op=LOAD Dec 13 00:22:50.182000 audit[2560]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2533 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626363343136383430633736623833656433613664336633646439 Dec 13 00:22:50.191000 audit: BPF prog-id=92 op=LOAD Dec 13 00:22:50.193000 audit: BPF prog-id=93 op=LOAD Dec 13 00:22:50.193000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.193000 audit: BPF prog-id=94 op=LOAD Dec 13 00:22:50.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.193000 audit: BPF prog-id=93 op=UNLOAD Dec 13 00:22:50.193000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.194000 audit: BPF prog-id=95 op=LOAD Dec 13 00:22:50.194000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.194000 audit: BPF prog-id=96 op=LOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit: BPF prog-id=96 op=UNLOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit: BPF prog-id=97 op=LOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit: BPF prog-id=98 op=LOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit: BPF prog-id=98 op=UNLOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit: BPF prog-id=97 op=UNLOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit: BPF prog-id=99 op=LOAD Dec 13 00:22:50.194000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.194000 audit: BPF prog-id=99 op=UNLOAD Dec 13 00:22:50.194000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.194000 audit: BPF prog-id=100 op=LOAD Dec 13 00:22:50.194000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2535 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: BPF prog-id=95 op=UNLOAD Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306633353066306432306634346363643638383266336636653034 Dec 13 00:22:50.194000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.194000 audit: BPF prog-id=101 op=LOAD Dec 13 00:22:50.194000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2532 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833626664306238336661356362613562316337613365376233643364 Dec 13 00:22:50.292018 containerd[1654]: time="2025-12-13T00:22:50.291964428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ba1daa93dcb2c782ae7b6adc97d57eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"83bfd0b83fa5cba5b1c7a3e7b3d3d98dae4d56114c594f3c4e4239d3b6b00b1a\"" Dec 13 00:22:50.293643 kubelet[2466]: E1213 00:22:50.293618 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:50.303143 containerd[1654]: time="2025-12-13T00:22:50.303083966Z" level=info msg="CreateContainer within sandbox \"83bfd0b83fa5cba5b1c7a3e7b3d3d98dae4d56114c594f3c4e4239d3b6b00b1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 00:22:50.307755 containerd[1654]: time="2025-12-13T00:22:50.307714909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dbcc416840c76b83ed3a6d3f3dd9628e482a08c3530c0b8521dda4f1c784383\"" Dec 13 00:22:50.308911 kubelet[2466]: E1213 00:22:50.308741 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:50.314658 containerd[1654]: time="2025-12-13T00:22:50.314610212Z" level=info msg="CreateContainer within sandbox \"4dbcc416840c76b83ed3a6d3f3dd9628e482a08c3530c0b8521dda4f1c784383\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 00:22:50.315935 containerd[1654]: time="2025-12-13T00:22:50.315894641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f0f350f0d20f44ccd6882f3f6e04741d29b8c9d3b8e4f321d6a5bfa54d7cb4b\"" Dec 13 00:22:50.316190 containerd[1654]: time="2025-12-13T00:22:50.316150334Z" level=info msg="Container 1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:22:50.317677 kubelet[2466]: E1213 00:22:50.317637 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:50.325276 containerd[1654]: time="2025-12-13T00:22:50.325119708Z" level=info msg="CreateContainer within sandbox 
\"2f0f350f0d20f44ccd6882f3f6e04741d29b8c9d3b8e4f321d6a5bfa54d7cb4b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 00:22:50.328340 containerd[1654]: time="2025-12-13T00:22:50.328307684Z" level=info msg="Container 88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:22:50.333157 containerd[1654]: time="2025-12-13T00:22:50.333102517Z" level=info msg="CreateContainer within sandbox \"83bfd0b83fa5cba5b1c7a3e7b3d3d98dae4d56114c594f3c4e4239d3b6b00b1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33\"" Dec 13 00:22:50.333833 containerd[1654]: time="2025-12-13T00:22:50.333800726Z" level=info msg="StartContainer for \"1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33\"" Dec 13 00:22:50.334969 containerd[1654]: time="2025-12-13T00:22:50.334931413Z" level=info msg="connecting to shim 1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33" address="unix:///run/containerd/s/2c10c1d0a6354f23f68d5ef7ccac2c3fdb9292a948402dc50f7d162c15bf2796" protocol=ttrpc version=3 Dec 13 00:22:50.336737 containerd[1654]: time="2025-12-13T00:22:50.336687052Z" level=info msg="CreateContainer within sandbox \"4dbcc416840c76b83ed3a6d3f3dd9628e482a08c3530c0b8521dda4f1c784383\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df\"" Dec 13 00:22:50.337839 containerd[1654]: time="2025-12-13T00:22:50.337779557Z" level=info msg="StartContainer for \"88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df\"" Dec 13 00:22:50.339486 containerd[1654]: time="2025-12-13T00:22:50.339441999Z" level=info msg="connecting to shim 88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df" address="unix:///run/containerd/s/265edfdfe8a14e9ccf717c5fdbab0905276abdc9efe8bbe9885876fce3cdfa57" protocol=ttrpc version=3 Dec 13 00:22:50.342591 containerd[1654]: time="2025-12-13T00:22:50.342405982Z" level=info msg="Container 8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:22:50.355547 systemd[1]: Started cri-containerd-1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33.scope - libcontainer container 1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33. Dec 13 00:22:50.362354 containerd[1654]: time="2025-12-13T00:22:50.361952818Z" level=info msg="CreateContainer within sandbox \"2f0f350f0d20f44ccd6882f3f6e04741d29b8c9d3b8e4f321d6a5bfa54d7cb4b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e\"" Dec 13 00:22:50.362938 containerd[1654]: time="2025-12-13T00:22:50.362908345Z" level=info msg="StartContainer for \"8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e\"" Dec 13 00:22:50.364990 containerd[1654]: time="2025-12-13T00:22:50.364955975Z" level=info msg="connecting to shim 8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e" address="unix:///run/containerd/s/c1c28f5fd3a371a327dd4c0dfbfbdbe7aceb946cbdd04f05a10aeb0e1d204463" protocol=ttrpc version=3 Dec 13 00:22:50.371558 systemd[1]: Started cri-containerd-88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df.scope - libcontainer container 88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df. 
Dec 13 00:22:50.381000 audit: BPF prog-id=102 op=LOAD Dec 13 00:22:50.382000 audit: BPF prog-id=103 op=LOAD Dec 13 00:22:50.382000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.382000 audit: BPF prog-id=103 op=UNLOAD Dec 13 00:22:50.382000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.382000 audit: BPF prog-id=104 op=LOAD Dec 13 00:22:50.382000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.382000 audit: BPF prog-id=105 op=LOAD Dec 13 00:22:50.382000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.382000 audit: BPF prog-id=105 op=UNLOAD Dec 13 00:22:50.382000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.383000 audit: BPF prog-id=104 op=UNLOAD Dec 13 00:22:50.383000 audit[2646]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.383000 audit: BPF prog-id=106 op=LOAD Dec 13 00:22:50.383000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2532 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133323135383762326634616666386238366161346638623437306338 Dec 13 00:22:50.391406 systemd[1]: Started cri-containerd-8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e.scope - libcontainer container 8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e. Dec 13 00:22:50.400000 audit: BPF prog-id=107 op=LOAD Dec 13 00:22:50.401000 audit: BPF prog-id=108 op=LOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.401000 audit: BPF prog-id=108 op=UNLOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.401000 audit: BPF prog-id=109 op=LOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.401000 audit: BPF prog-id=110 op=LOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.401000 audit: BPF prog-id=110 op=UNLOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.401000 audit: BPF prog-id=109 op=UNLOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.401000 audit: BPF prog-id=111 op=LOAD Dec 13 00:22:50.401000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2533 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383838393432663133643834356331313234376331306335623437 Dec 13 00:22:50.414000 audit: BPF prog-id=112 op=LOAD Dec 13 00:22:50.415000 audit: BPF prog-id=113 op=LOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.415000 audit: BPF prog-id=113 op=UNLOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.415000 audit: BPF prog-id=114 op=LOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.415000 audit: BPF prog-id=115 op=LOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.415000 audit: BPF prog-id=115 op=UNLOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.415000 audit: BPF prog-id=114 op=UNLOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.415000 audit: BPF prog-id=116 op=LOAD Dec 13 00:22:50.415000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2535 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:50.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861643539656363633264376133643838666131646665663566636630 Dec 13 00:22:50.447314 containerd[1654]: time="2025-12-13T00:22:50.446448921Z" level=info msg="StartContainer for \"1321587b2f4aff8b86aa4f8b470c867bf896c3d3fd5cacbf51a57b73534afa33\" returns successfully" Dec 13 00:22:50.473100 containerd[1654]: time="2025-12-13T00:22:50.472996932Z" level=info msg="StartContainer for \"8ad59eccc2d7a3d88fa1dfef5fcf01a3158a1d70c6b73dbe2931b9a6070dd30e\" returns successfully" Dec 13 00:22:50.473313 containerd[1654]: time="2025-12-13T00:22:50.473278765Z" level=info msg="StartContainer for \"88888942f13d845c11247c10c5b473cb3af5d6935f5af8c2a5357347594a92df\" returns successfully" Dec 13 00:22:50.525874 kubelet[2466]: E1213 00:22:50.525804 2466 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 00:22:50.603377 kubelet[2466]: E1213 00:22:50.601885 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:50.603739 kubelet[2466]: E1213 00:22:50.603718 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:50.614009 kubelet[2466]: E1213 00:22:50.613960 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:50.614148 kubelet[2466]: E1213 00:22:50.614098 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:50.622261 kubelet[2466]: E1213 00:22:50.621986 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:50.622261 kubelet[2466]: E1213 00:22:50.622133 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:51.623927 kubelet[2466]: E1213 00:22:51.623875 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Dec 13 00:22:51.624719 kubelet[2466]: E1213 00:22:51.624122 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:51.624719 kubelet[2466]: E1213 00:22:51.624204 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:51.624719 kubelet[2466]: E1213 00:22:51.624461 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:51.780087 kubelet[2466]: I1213 00:22:51.780043 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:22:52.183565 kubelet[2466]: E1213 00:22:52.183475 2466 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 00:22:52.277430 kubelet[2466]: I1213 00:22:52.277318 2466 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:22:52.277430 kubelet[2466]: E1213 00:22:52.277384 2466 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 00:22:52.301023 kubelet[2466]: E1213 00:22:52.300910 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:52.313439 update_engine[1639]: I20251213 00:22:52.313324 1639 update_attempter.cc:509] Updating boot flags... Dec 13 00:22:52.401622 kubelet[2466]: E1213 00:22:52.401567 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:52.501913 kubelet[2466]: E1213 00:22:52.501855 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:52.602157 kubelet[2466]: E1213 00:22:52.602071 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:52.628910 kubelet[2466]: E1213 00:22:52.628876 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:22:52.630224 kubelet[2466]: E1213 00:22:52.630117 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:52.705274 kubelet[2466]: E1213 00:22:52.702605 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:52.803499 kubelet[2466]: E1213 00:22:52.803306 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:22:52.842697 kubelet[2466]: I1213 00:22:52.842612 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:52.850185 kubelet[2466]: E1213 00:22:52.850100 2466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:52.850185 kubelet[2466]: I1213 00:22:52.850142 2466 kubelet.go:3219] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:52.852472 kubelet[2466]: E1213 00:22:52.852436 2466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:52.852472 kubelet[2466]: I1213 00:22:52.852461 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:52.854311 kubelet[2466]: E1213 00:22:52.854276 2466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:53.525944 kubelet[2466]: I1213 00:22:53.525900 2466 apiserver.go:52] "Watching apiserver" Dec 13 00:22:53.542355 kubelet[2466]: I1213 00:22:53.542305 2466 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 00:22:54.562725 kubelet[2466]: I1213 00:22:54.562662 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:54.714528 kubelet[2466]: E1213 00:22:54.714476 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:55.631020 kubelet[2466]: E1213 00:22:55.630963 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:56.424893 kubelet[2466]: I1213 00:22:56.424851 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:56.661896 kubelet[2466]: E1213 00:22:56.661841 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:57.093688 systemd[1]: Reload requested from client PID 2772 ('systemctl') (unit session-8.scope)... Dec 13 00:22:57.093712 systemd[1]: Reloading... Dec 13 00:22:57.198274 zram_generator::config[2824]: No configuration found. Dec 13 00:22:57.565155 systemd[1]: Reloading finished in 471 ms. Dec 13 00:22:57.597710 kubelet[2466]: I1213 00:22:57.597619 2466 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:22:57.597776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:57.622002 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 00:22:57.622461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:57.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:57.622541 systemd[1]: kubelet.service: Consumed 1.783s CPU time, 125.2M memory peak. Dec 13 00:22:57.623701 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 13 00:22:57.623763 kernel: audit: type=1131 audit(1765585377.621:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:57.624766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:22:57.629459 kernel: audit: type=1334 audit(1765585377.624:401): prog-id=117 op=LOAD Dec 13 00:22:57.624000 audit: BPF prog-id=117 op=LOAD Dec 13 00:22:57.624000 audit: BPF prog-id=118 op=LOAD Dec 13 00:22:57.631113 kernel: audit: type=1334 audit(1765585377.624:402): prog-id=118 op=LOAD Dec 13 00:22:57.631161 kernel: audit: type=1334 audit(1765585377.624:403): prog-id=82 op=UNLOAD Dec 13 00:22:57.624000 audit: BPF prog-id=82 op=UNLOAD Dec 13 00:22:57.624000 audit: BPF prog-id=83 op=UNLOAD Dec 13 00:22:57.634016 kernel: audit: type=1334 audit(1765585377.624:404): prog-id=83 op=UNLOAD Dec 13 00:22:57.634053 kernel: audit: type=1334 audit(1765585377.625:405): prog-id=119 op=LOAD Dec 13 00:22:57.625000 audit: BPF prog-id=119 op=LOAD Dec 13 00:22:57.635434 kernel: audit: type=1334 audit(1765585377.625:406): prog-id=67 op=UNLOAD Dec 13 00:22:57.625000 audit: BPF prog-id=67 op=UNLOAD Dec 13 00:22:57.636920 kernel: audit: type=1334 audit(1765585377.626:407): prog-id=120 op=LOAD Dec 13 00:22:57.626000 audit: BPF prog-id=120 op=LOAD Dec 13 00:22:57.626000 audit: BPF prog-id=121 op=LOAD Dec 13 00:22:57.640043 kernel: audit: type=1334 audit(1765585377.626:408): prog-id=121 op=LOAD Dec 13 00:22:57.640085 kernel: audit: type=1334 audit(1765585377.626:409): prog-id=68 op=UNLOAD Dec 13 00:22:57.626000 audit: BPF prog-id=68 op=UNLOAD Dec 13 00:22:57.626000 audit: BPF prog-id=69 op=UNLOAD Dec 13 00:22:57.628000 audit: BPF prog-id=122 op=LOAD Dec 13 00:22:57.628000 audit: BPF prog-id=78 op=UNLOAD Dec 13 00:22:57.628000 audit: BPF prog-id=123 op=LOAD Dec 13 00:22:57.628000 audit: BPF prog-id=124 op=LOAD Dec 13 00:22:57.628000 audit: BPF prog-id=79 op=UNLOAD Dec 13 00:22:57.628000 audit: BPF prog-id=80 op=UNLOAD Dec 13 00:22:57.643000 audit: BPF prog-id=125 op=LOAD Dec 13 00:22:57.643000 audit: BPF prog-id=72 op=UNLOAD Dec 13 00:22:57.643000 audit: BPF prog-id=126 op=LOAD Dec 13 00:22:57.643000 audit: BPF prog-id=127 op=LOAD Dec 13 00:22:57.643000 audit: BPF prog-id=73 op=UNLOAD Dec 13 00:22:57.643000 audit: BPF prog-id=74 op=UNLOAD Dec 13 00:22:57.644000 audit: BPF prog-id=128 op=LOAD Dec 13 00:22:57.644000 audit: BPF prog-id=70 op=UNLOAD Dec 13 00:22:57.646000 audit: BPF prog-id=129 op=LOAD Dec 13 00:22:57.646000 audit: BPF prog-id=75 op=UNLOAD Dec 13 00:22:57.646000 audit: BPF prog-id=130 op=LOAD Dec 13 00:22:57.646000 audit: BPF prog-id=131 op=LOAD Dec 13 00:22:57.646000 audit: BPF prog-id=76 op=UNLOAD Dec 13 00:22:57.646000 audit: BPF prog-id=77 op=UNLOAD Dec 13 00:22:57.647000 audit: BPF prog-id=132 op=LOAD Dec 13 00:22:57.647000 audit: BPF prog-id=84 op=UNLOAD Dec 13 00:22:57.647000 audit: BPF prog-id=133 op=LOAD Dec 13 00:22:57.647000 audit: BPF prog-id=134 op=LOAD Dec 13 00:22:57.647000 audit: BPF prog-id=85 op=UNLOAD Dec 13 00:22:57.647000 audit: BPF prog-id=86 op=UNLOAD Dec 13 00:22:57.649000 audit: BPF prog-id=135 op=LOAD Dec 13 00:22:57.649000 audit: BPF prog-id=81 op=UNLOAD Dec 13 00:22:57.651000 audit: BPF prog-id=136 op=LOAD Dec 13 00:22:57.651000 audit: BPF prog-id=71 op=UNLOAD Dec 13 00:22:57.873448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:22:57.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:57.893776 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:22:57.948978 kubelet[2863]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:22:57.948978 kubelet[2863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:22:57.948978 kubelet[2863]: I1213 00:22:57.948977 2863 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:22:57.959164 kubelet[2863]: I1213 00:22:57.959111 2863 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 13 00:22:57.959164 kubelet[2863]: I1213 00:22:57.959139 2863 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:22:57.959164 kubelet[2863]: I1213 00:22:57.959168 2863 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 13 00:22:57.959376 kubelet[2863]: I1213 00:22:57.959180 2863 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 00:22:57.959407 kubelet[2863]: I1213 00:22:57.959392 2863 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:22:57.960545 kubelet[2863]: I1213 00:22:57.960515 2863 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 13 00:22:57.962515 kubelet[2863]: I1213 00:22:57.962471 2863 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:22:57.968332 kubelet[2863]: I1213 00:22:57.968307 2863 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:22:57.977250 kubelet[2863]: I1213 00:22:57.977180 2863 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 13 00:22:57.977528 kubelet[2863]: I1213 00:22:57.977467 2863 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:22:57.977791 kubelet[2863]: I1213 00:22:57.977510 2863 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:22:57.977791 kubelet[2863]: I1213 00:22:57.977783 2863 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:22:57.977791 kubelet[2863]: I1213 00:22:57.977794 2863 container_manager_linux.go:306] "Creating device plugin manager" Dec 13 00:22:57.977935 kubelet[2863]: I1213 00:22:57.977820 2863 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 13 00:22:57.978702 kubelet[2863]: I1213 00:22:57.978665 2863 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:22:57.978882 kubelet[2863]: I1213 00:22:57.978850 2863 kubelet.go:475] "Attempting to sync node with API server" Dec 13 00:22:57.978922 kubelet[2863]: I1213 00:22:57.978886 2863 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:22:57.978922 kubelet[2863]: I1213 00:22:57.978914 2863 kubelet.go:387] "Adding apiserver pod source" Dec 13 00:22:57.978979 kubelet[2863]: I1213 00:22:57.978947 2863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:22:57.981497 kubelet[2863]: I1213 00:22:57.981471 2863 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:22:57.982160 kubelet[2863]: I1213 00:22:57.982114 2863 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:22:57.982294 kubelet[2863]: I1213 00:22:57.982272 2863 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 13 00:22:57.989446 
kubelet[2863]: I1213 00:22:57.989422 2863 server.go:1262] "Started kubelet" Dec 13 00:22:57.991631 kubelet[2863]: I1213 00:22:57.990573 2863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:22:57.994661 kubelet[2863]: I1213 00:22:57.993471 2863 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 13 00:22:57.996366 kubelet[2863]: I1213 00:22:57.990729 2863 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:22:57.996606 kubelet[2863]: I1213 00:22:57.996576 2863 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 00:22:57.996606 kubelet[2863]: I1213 00:22:57.990848 2863 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:22:57.997910 kubelet[2863]: I1213 00:22:57.990893 2863 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:22:57.998424 kubelet[2863]: I1213 00:22:57.997916 2863 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 13 00:22:57.998551 kubelet[2863]: E1213 00:22:57.998525 2863 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:22:57.998611 kubelet[2863]: I1213 00:22:57.997826 2863 reconciler.go:29] "Reconciler: start to sync state" Dec 13 00:22:57.998611 kubelet[2863]: I1213 00:22:57.998599 2863 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:22:57.998781 kubelet[2863]: I1213 00:22:57.998749 2863 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:22:57.999281 kubelet[2863]: I1213 00:22:57.997714 2863 server.go:310] "Adding debug handlers to kubelet server" Dec 13 00:22:58.000023 kubelet[2863]: I1213 00:22:57.999380 2863 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:22:58.000933 kubelet[2863]: I1213 00:22:58.000912 2863 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:22:58.014290 kubelet[2863]: I1213 00:22:58.014124 2863 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 13 00:22:58.016534 kubelet[2863]: I1213 00:22:58.016330 2863 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 13 00:22:58.016534 kubelet[2863]: I1213 00:22:58.016362 2863 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 13 00:22:58.016534 kubelet[2863]: I1213 00:22:58.016393 2863 kubelet.go:2427] "Starting kubelet main sync loop" Dec 13 00:22:58.016534 kubelet[2863]: E1213 00:22:58.016451 2863 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:22:58.041349 kubelet[2863]: I1213 00:22:58.041286 2863 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:22:58.041349 kubelet[2863]: I1213 00:22:58.041308 2863 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:22:58.041349 kubelet[2863]: I1213 00:22:58.041330 2863 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:22:58.041583 kubelet[2863]: I1213 00:22:58.041456 2863 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 00:22:58.041583 kubelet[2863]: I1213 00:22:58.041465 2863 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 00:22:58.041583 kubelet[2863]: I1213 00:22:58.041487 2863 policy_none.go:49] "None policy: Start" Dec 13 00:22:58.041583 kubelet[2863]: I1213 00:22:58.041495 2863 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 13 00:22:58.041583 kubelet[2863]: I1213 00:22:58.041506 2863 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 13 00:22:58.041726 kubelet[2863]: I1213 00:22:58.041600 2863 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 13 00:22:58.041726 kubelet[2863]: I1213 00:22:58.041608 2863 policy_none.go:47] "Start" Dec 13 00:22:58.046532 kubelet[2863]: E1213 00:22:58.046474 2863 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:22:58.047024 kubelet[2863]: I1213 00:22:58.046713 2863 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:22:58.047024 kubelet[2863]: I1213 00:22:58.046733 2863 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:22:58.047024 kubelet[2863]: I1213 00:22:58.046956 2863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:22:58.047792 kubelet[2863]: E1213 00:22:58.047756 2863 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 00:22:58.117616 kubelet[2863]: I1213 00:22:58.117549 2863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:58.117616 kubelet[2863]: I1213 00:22:58.117598 2863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:58.117902 kubelet[2863]: I1213 00:22:58.117642 2863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.155079 kubelet[2863]: I1213 00:22:58.154872 2863 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:22:58.200013 kubelet[2863]: I1213 00:22:58.199923 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.200013 kubelet[2863]: I1213 00:22:58.199974 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba1daa93dcb2c782ae7b6adc97d57eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ba1daa93dcb2c782ae7b6adc97d57eb\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:58.200013 kubelet[2863]: I1213 00:22:58.199997 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.200013 kubelet[2863]: I1213 00:22:58.200018 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.200384 kubelet[2863]: I1213 00:22:58.200043 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:58.200384 kubelet[2863]: I1213 00:22:58.200063 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba1daa93dcb2c782ae7b6adc97d57eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba1daa93dcb2c782ae7b6adc97d57eb\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:58.200384 kubelet[2863]: I1213 00:22:58.200088 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba1daa93dcb2c782ae7b6adc97d57eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ba1daa93dcb2c782ae7b6adc97d57eb\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:58.200384 kubelet[2863]: I1213 00:22:58.200114 2863 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.200384 kubelet[2863]: I1213 00:22:58.200137 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.258439 kubelet[2863]: E1213 00:22:58.258297 2863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:22:58.259216 kubelet[2863]: E1213 00:22:58.258305 2863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:58.259875 kubelet[2863]: I1213 00:22:58.259847 2863 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 00:22:58.259952 kubelet[2863]: I1213 00:22:58.259927 2863 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:22:58.559202 kubelet[2863]: E1213 00:22:58.558849 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:58.559202 kubelet[2863]: E1213 00:22:58.558907 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:58.559827 kubelet[2863]: E1213 00:22:58.559782 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:58.980497 kubelet[2863]: I1213 00:22:58.980449 2863 apiserver.go:52] "Watching apiserver" Dec 13 00:22:58.997146 kubelet[2863]: I1213 00:22:58.997021 2863 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 00:22:59.034434 kubelet[2863]: I1213 00:22:59.034166 2863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:59.034434 kubelet[2863]: E1213 00:22:59.034226 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:59.034960 kubelet[2863]: I1213 00:22:59.034940 2863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:59.052525 kubelet[2863]: E1213 00:22:59.052458 2863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:22:59.052834 kubelet[2863]: E1213 00:22:59.052777 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:59.053369 kubelet[2863]: E1213 00:22:59.053291 2863 kubelet.go:3221] "Failed creating a mirror 
pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 00:22:59.053811 kubelet[2863]: E1213 00:22:59.053747 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:59.186686 kubelet[2863]: I1213 00:22:59.186593 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.186521319 podStartE2EDuration="3.186521319s" podCreationTimestamp="2025-12-13 00:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:22:59.136157423 +0000 UTC m=+1.225429635" watchObservedRunningTime="2025-12-13 00:22:59.186521319 +0000 UTC m=+1.275793531" Dec 13 00:22:59.274367 kubelet[2863]: I1213 00:22:59.273112 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.273092001 podStartE2EDuration="1.273092001s" podCreationTimestamp="2025-12-13 00:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:22:59.186800915 +0000 UTC m=+1.276073127" watchObservedRunningTime="2025-12-13 00:22:59.273092001 +0000 UTC m=+1.362364213" Dec 13 00:22:59.299850 kubelet[2863]: I1213 00:22:59.299767 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.299748007 podStartE2EDuration="5.299748007s" podCreationTimestamp="2025-12-13 00:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:22:59.273552369 +0000 UTC m=+1.362824571" watchObservedRunningTime="2025-12-13 00:22:59.299748007 +0000 UTC m=+1.389020219" Dec 13 00:23:00.037738 kubelet[2863]: E1213 00:23:00.037660 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:00.038266 kubelet[2863]: E1213 00:23:00.037869 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:01.040018 kubelet[2863]: E1213 00:23:01.039949 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:01.819477 kubelet[2863]: E1213 00:23:01.819425 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:02.044845 kubelet[2863]: E1213 00:23:02.044752 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:02.228031 kubelet[2863]: I1213 00:23:02.227991 2863 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 00:23:02.228330 containerd[1654]: time="2025-12-13T00:23:02.228287908Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Dec 13 00:23:02.228714 kubelet[2863]: I1213 00:23:02.228593 2863 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 00:23:02.806334 systemd[1]: Created slice kubepods-besteffort-pod0d1cad90_c164_4aae_8d4a_a4ec239597d4.slice - libcontainer container kubepods-besteffort-pod0d1cad90_c164_4aae_8d4a_a4ec239597d4.slice. Dec 13 00:23:02.831593 kubelet[2863]: I1213 00:23:02.831420 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d1cad90-c164-4aae-8d4a-a4ec239597d4-kube-proxy\") pod \"kube-proxy-5r7ng\" (UID: \"0d1cad90-c164-4aae-8d4a-a4ec239597d4\") " pod="kube-system/kube-proxy-5r7ng" Dec 13 00:23:02.831593 kubelet[2863]: I1213 00:23:02.831480 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d1cad90-c164-4aae-8d4a-a4ec239597d4-xtables-lock\") pod \"kube-proxy-5r7ng\" (UID: \"0d1cad90-c164-4aae-8d4a-a4ec239597d4\") " pod="kube-system/kube-proxy-5r7ng" Dec 13 00:23:02.831593 kubelet[2863]: I1213 00:23:02.831498 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d1cad90-c164-4aae-8d4a-a4ec239597d4-lib-modules\") pod \"kube-proxy-5r7ng\" (UID: \"0d1cad90-c164-4aae-8d4a-a4ec239597d4\") " pod="kube-system/kube-proxy-5r7ng" Dec 13 00:23:02.831593 kubelet[2863]: I1213 00:23:02.831519 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9482l\" (UniqueName: \"kubernetes.io/projected/0d1cad90-c164-4aae-8d4a-a4ec239597d4-kube-api-access-9482l\") pod \"kube-proxy-5r7ng\" (UID: \"0d1cad90-c164-4aae-8d4a-a4ec239597d4\") " pod="kube-system/kube-proxy-5r7ng" Dec 13 00:23:03.045707 kubelet[2863]: E1213 00:23:03.045647 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:03.083030 kubelet[2863]: E1213 00:23:03.082171 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:03.125929 kubelet[2863]: E1213 00:23:03.125877 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:03.126791 containerd[1654]: time="2025-12-13T00:23:03.126734706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r7ng,Uid:0d1cad90-c164-4aae-8d4a-a4ec239597d4,Namespace:kube-system,Attempt:0,}" Dec 13 00:23:03.181749 containerd[1654]: time="2025-12-13T00:23:03.181672340Z" level=info msg="connecting to shim a1b1ca81166916e9ae334a0d6e65a4ae1d87686a80d458d1d0c0b265225976b3" address="unix:///run/containerd/s/6f0aaa3611723287fe322e59974922f067c8e3308952236d46720a6b7cd51d0a" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:03.220719 systemd[1]: Started cri-containerd-a1b1ca81166916e9ae334a0d6e65a4ae1d87686a80d458d1d0c0b265225976b3.scope - libcontainer container a1b1ca81166916e9ae334a0d6e65a4ae1d87686a80d458d1d0c0b265225976b3. 
Dec 13 00:23:03.236000 audit: BPF prog-id=137 op=LOAD Dec 13 00:23:03.238222 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 00:23:03.238330 kernel: audit: type=1334 audit(1765585383.236:442): prog-id=137 op=LOAD Dec 13 00:23:03.237000 audit: BPF prog-id=138 op=LOAD Dec 13 00:23:03.241522 kernel: audit: type=1334 audit(1765585383.237:443): prog-id=138 op=LOAD Dec 13 00:23:03.241577 kernel: audit: type=1300 audit(1765585383.237:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.253371 kernel: audit: type=1327 audit(1765585383.237:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.255291 kernel: audit: type=1334 audit(1765585383.237:444): prog-id=138 op=UNLOAD Dec 13 00:23:03.237000 audit: BPF prog-id=138 op=UNLOAD Dec 13 00:23:03.261454 kernel: audit: type=1300 audit(1765585383.237:444): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.269047 kernel: audit: type=1327 audit(1765585383.237:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.269123 kernel: audit: type=1334 audit(1765585383.237:445): prog-id=139 op=LOAD Dec 13 00:23:03.237000 audit: BPF prog-id=139 op=LOAD Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.275500 kernel: audit: type=1300 audit(1765585383.237:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.275625 kernel: audit: type=1327 audit(1765585383.237:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.277795 containerd[1654]: time="2025-12-13T00:23:03.277754046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r7ng,Uid:0d1cad90-c164-4aae-8d4a-a4ec239597d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1b1ca81166916e9ae334a0d6e65a4ae1d87686a80d458d1d0c0b265225976b3\"" Dec 13 00:23:03.285708 kubelet[2863]: E1213 00:23:03.285611 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:03.237000 audit: BPF prog-id=140 op=LOAD Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.237000 audit: BPF prog-id=140 op=UNLOAD Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.237000 audit: BPF prog-id=139 op=UNLOAD Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.237000 audit: BPF prog-id=141 op=LOAD Dec 13 00:23:03.237000 audit[2936]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2926 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131623163613831313636393136653961653333346130643665363561 Dec 13 00:23:03.299008 containerd[1654]: time="2025-12-13T00:23:03.298947273Z" level=info msg="CreateContainer within sandbox \"a1b1ca81166916e9ae334a0d6e65a4ae1d87686a80d458d1d0c0b265225976b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 00:23:03.329585 containerd[1654]: time="2025-12-13T00:23:03.327916906Z" level=info msg="Container 8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:03.345717 systemd[1]: Created slice kubepods-besteffort-podd8099e4f_f0a3_4cec_9ab6_2e09c3f7cabf.slice - libcontainer container kubepods-besteffort-podd8099e4f_f0a3_4cec_9ab6_2e09c3f7cabf.slice. Dec 13 00:23:03.351446 containerd[1654]: time="2025-12-13T00:23:03.351407640Z" level=info msg="CreateContainer within sandbox \"a1b1ca81166916e9ae334a0d6e65a4ae1d87686a80d458d1d0c0b265225976b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75\"" Dec 13 00:23:03.352170 containerd[1654]: time="2025-12-13T00:23:03.352146272Z" level=info msg="StartContainer for \"8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75\"" Dec 13 00:23:03.353712 containerd[1654]: time="2025-12-13T00:23:03.353686242Z" level=info msg="connecting to shim 8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75" address="unix:///run/containerd/s/6f0aaa3611723287fe322e59974922f067c8e3308952236d46720a6b7cd51d0a" protocol=ttrpc version=3 Dec 13 00:23:03.378577 systemd[1]: Started cri-containerd-8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75.scope - libcontainer container 8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75. 
Dec 13 00:23:03.441661 kubelet[2863]: I1213 00:23:03.441590 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpv4\" (UniqueName: \"kubernetes.io/projected/d8099e4f-f0a3-4cec-9ab6-2e09c3f7cabf-kube-api-access-htpv4\") pod \"tigera-operator-65cdcdfd6d-rfvxr\" (UID: \"d8099e4f-f0a3-4cec-9ab6-2e09c3f7cabf\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-rfvxr" Dec 13 00:23:03.441661 kubelet[2863]: I1213 00:23:03.441639 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8099e4f-f0a3-4cec-9ab6-2e09c3f7cabf-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-rfvxr\" (UID: \"d8099e4f-f0a3-4cec-9ab6-2e09c3f7cabf\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-rfvxr" Dec 13 00:23:03.459000 audit: BPF prog-id=142 op=LOAD Dec 13 00:23:03.459000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2926 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633230653939633063343366633737353263323733373830346133 Dec 13 00:23:03.459000 audit: BPF prog-id=143 op=LOAD Dec 13 00:23:03.459000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2926 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633230653939633063343366633737353263323733373830346133 Dec 13 00:23:03.459000 audit: BPF prog-id=143 op=UNLOAD Dec 13 00:23:03.459000 audit[2962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633230653939633063343366633737353263323733373830346133 Dec 13 00:23:03.459000 audit: BPF prog-id=142 op=UNLOAD Dec 13 00:23:03.459000 audit[2962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.459000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633230653939633063343366633737353263323733373830346133 Dec 13 00:23:03.459000 audit: BPF prog-id=144 op=LOAD Dec 13 00:23:03.459000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2926 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633230653939633063343366633737353263323733373830346133 Dec 13 00:23:03.482616 containerd[1654]: time="2025-12-13T00:23:03.482568448Z" level=info msg="StartContainer for \"8fc20e99c0c43fc7752c2737804a374ee6aa3cf5361d43fba72628b3be43bb75\" returns successfully" Dec 13 00:23:03.653801 containerd[1654]: time="2025-12-13T00:23:03.653641648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-rfvxr,Uid:d8099e4f-f0a3-4cec-9ab6-2e09c3f7cabf,Namespace:tigera-operator,Attempt:0,}" Dec 13 00:23:03.688418 containerd[1654]: time="2025-12-13T00:23:03.688354309Z" level=info msg="connecting to shim 0859a722d87a76bc05447de512520e7bc07f3cf595adf22688855f8fda21bd44" address="unix:///run/containerd/s/c29e002f58b7c0c805ddb583a4e2fbc0aacac108ffcb0d8ab14e8b00a4f3dfb0" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:03.734608 systemd[1]: Started cri-containerd-0859a722d87a76bc05447de512520e7bc07f3cf595adf22688855f8fda21bd44.scope - libcontainer container 0859a722d87a76bc05447de512520e7bc07f3cf595adf22688855f8fda21bd44. 
Dec 13 00:23:03.735000 audit[3061]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.735000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe484d5630 a2=0 a3=7ffe484d561c items=0 ppid=2976 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:23:03.738000 audit[3063]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.738000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc47943e80 a2=0 a3=7ffc47943e6c items=0 ppid=2976 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:23:03.739000 audit[3066]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.739000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff79642630 a2=0 a3=7fff7964261c items=0 ppid=2976 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:23:03.741000 audit[3068]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.741000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7707cdd0 a2=0 a3=7ffd7707cdbc items=0 ppid=2976 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:23:03.743000 audit[3071]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.743000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff89e0d0f0 a2=0 a3=7fff89e0d0dc items=0 ppid=2976 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:23:03.743000 audit[3070]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.743000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffec801f400 a2=0 a3=7ffec801f3ec items=0 ppid=2976 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:23:03.752000 audit: BPF prog-id=145 op=LOAD Dec 13 00:23:03.753000 audit: BPF prog-id=146 op=LOAD Dec 13 00:23:03.753000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.753000 audit: BPF prog-id=146 op=UNLOAD Dec 13 00:23:03.753000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.753000 audit: BPF prog-id=147 op=LOAD Dec 13 00:23:03.753000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.754000 audit: BPF prog-id=148 op=LOAD Dec 13 00:23:03.754000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.754000 audit: BPF prog-id=148 op=UNLOAD Dec 13 00:23:03.754000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
00:23:03.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.754000 audit: BPF prog-id=147 op=UNLOAD Dec 13 00:23:03.754000 audit[3028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.754000 audit: BPF prog-id=149 op=LOAD Dec 13 00:23:03.754000 audit[3028]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3012 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353961373232643837613736626330353434376465353132353230 Dec 13 00:23:03.818789 containerd[1654]: time="2025-12-13T00:23:03.818735257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-rfvxr,Uid:d8099e4f-f0a3-4cec-9ab6-2e09c3f7cabf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0859a722d87a76bc05447de512520e7bc07f3cf595adf22688855f8fda21bd44\"" Dec 13 00:23:03.820709 containerd[1654]: time="2025-12-13T00:23:03.820681132Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 13 00:23:03.839000 audit[3084]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.839000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff64245fd0 a2=0 a3=7fff64245fbc items=0 ppid=2976 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:23:03.844000 audit[3086]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.844000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffedab861b0 a2=0 a3=7ffedab8619c items=0 ppid=2976 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.844000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 13 00:23:03.849000 audit[3089]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.849000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd86100740 a2=0 a3=7ffd8610072c items=0 ppid=2976 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 13 00:23:03.850000 audit[3090]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.850000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd19a2e0 a2=0 a3=7ffdfd19a2cc items=0 ppid=2976 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:23:03.854000 audit[3092]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.854000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1ec7d9c0 a2=0 a3=7ffc1ec7d9ac items=0 ppid=2976 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:23:03.857000 audit[3093]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.857000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb6b09020 a2=0 a3=7ffeb6b0900c items=0 ppid=2976 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:23:03.861000 audit[3095]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.861000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff5f7ba720 a2=0 a3=7fff5f7ba70c items=0 ppid=2976 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.866000 audit[3098]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.866000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc25cebbc0 a2=0 a3=7ffc25cebbac items=0 ppid=2976 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.868000 audit[3099]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.868000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9ba87500 a2=0 a3=7fff9ba874ec items=0 ppid=2976 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:23:03.872000 audit[3101]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.872000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeebc641e0 a2=0 a3=7ffeebc641cc items=0 ppid=2976 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:23:03.873000 audit[3102]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.873000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc518d3900 a2=0 a3=7ffc518d38ec items=0 ppid=2976 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:23:03.877000 audit[3104]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.877000 
audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde55357f0 a2=0 a3=7ffde55357dc items=0 ppid=2976 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 13 00:23:03.881000 audit[3107]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.881000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc39caa6f0 a2=0 a3=7ffc39caa6dc items=0 ppid=2976 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 13 00:23:03.887000 audit[3110]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.887000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff868103b0 a2=0 a3=7fff8681039c items=0 ppid=2976 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 13 00:23:03.888000 audit[3111]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.888000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc31c89e00 a2=0 a3=7ffc31c89dec items=0 ppid=2976 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:23:03.891000 audit[3113]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.891000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffbc47bcb0 a2=0 a3=7fffbc47bc9c items=0 ppid=2976 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.891000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.896000 audit[3116]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.896000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb492a7d0 a2=0 a3=7ffeb492a7bc items=0 ppid=2976 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.898000 audit[3117]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.898000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4fe46e80 a2=0 a3=7fff4fe46e6c items=0 ppid=2976 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:23:03.901000 audit[3119]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:23:03.901000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd900873c0 a2=0 a3=7ffd900873ac items=0 ppid=2976 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.901000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:23:03.924000 audit[3125]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:03.924000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc50577750 a2=0 a3=7ffc5057773c items=0 ppid=2976 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:03.934000 audit[3125]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:03.934000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc50577750 a2=0 a3=7ffc5057773c items=0 ppid=2976 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:03.936000 audit[3130]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.936000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe04a39230 a2=0 a3=7ffe04a3921c items=0 ppid=2976 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:23:03.940000 audit[3132]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.940000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffda141eb00 a2=0 a3=7ffda141eaec items=0 ppid=2976 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 13 00:23:03.946000 audit[3135]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.946000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd7380f060 a2=0 a3=7ffd7380f04c items=0 ppid=2976 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 13 00:23:03.948000 audit[3136]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.948000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddf536040 a2=0 a3=7ffddf53602c items=0 ppid=2976 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:23:03.949943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247399484.mount: Deactivated successfully. 
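Note: the audit PROCTITLE fields in the entries above are hex-encoded command lines (the raw /proc/<pid>/cmdline, with NUL bytes separating argv elements), so the exact iptables/ip6tables invocations made by kube-proxy can be recovered from the log. A minimal decoding sketch in Python, using the iptables-restore value recorded above (the variable names and print format are illustrative only):

# Decode an audit PROCTITLE value: hex-encoded cmdline with NUL-separated argv.
hexval = (
    "69707461626C65732D726573746F7265002D770035"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)  # copied from the iptables-restore entry above
argv = bytes.fromhex(hexval).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# -> iptables-restore -w 5 --noflush --counters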
Dec 13 00:23:03.952000 audit[3138]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.952000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc05d8d180 a2=0 a3=7ffc05d8d16c items=0 ppid=2976 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:23:03.954000 audit[3139]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.954000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf6b0f0d0 a2=0 a3=7ffdf6b0f0bc items=0 ppid=2976 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:23:03.958000 audit[3141]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.958000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb4d9bb00 a2=0 a3=7ffeb4d9baec items=0 ppid=2976 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.963000 audit[3144]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.963000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc91873cc0 a2=0 a3=7ffc91873cac items=0 ppid=2976 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.964000 audit[3145]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.964000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea8bf58b0 a2=0 a3=7ffea8bf589c items=0 ppid=2976 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:23:03.968000 audit[3147]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.968000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc17775a10 a2=0 a3=7ffc177759fc items=0 ppid=2976 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:23:03.969000 audit[3148]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.969000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb0ac5c30 a2=0 a3=7ffcb0ac5c1c items=0 ppid=2976 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:23:03.973000 audit[3150]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.973000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd509d8750 a2=0 a3=7ffd509d873c items=0 ppid=2976 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 13 00:23:03.978000 audit[3153]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.978000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff71ca2630 a2=0 a3=7fff71ca261c items=0 ppid=2976 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.978000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 13 00:23:03.983000 audit[3156]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.983000 audit[3156]: SYSCALL arch=c000003e syscall=46 
success=yes exit=748 a0=3 a1=7ffc068f6040 a2=0 a3=7ffc068f602c items=0 ppid=2976 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.983000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 13 00:23:03.985000 audit[3157]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.985000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc68876690 a2=0 a3=7ffc6887667c items=0 ppid=2976 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:23:03.988000 audit[3159]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.988000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdbe8bb0d0 a2=0 a3=7ffdbe8bb0bc items=0 ppid=2976 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.988000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.993000 audit[3162]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.993000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8d066500 a2=0 a3=7fff8d0664ec items=0 ppid=2976 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:23:03.994000 audit[3163]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.994000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff62fb3be0 a2=0 a3=7fff62fb3bcc items=0 ppid=2976 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:23:03.997000 audit[3165]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3165 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.997000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffaf480550 a2=0 a3=7fffaf48053c items=0 ppid=2976 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:23:03.999000 audit[3166]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:03.999000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe99263b0 a2=0 a3=7fffe992639c items=0 ppid=2976 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:23:04.002000 audit[3168]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:04.002000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd85c50280 a2=0 a3=7ffd85c5026c items=0 ppid=2976 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:04.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:23:04.007000 audit[3171]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:23:04.007000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd31185300 a2=0 a3=7ffd311852ec items=0 ppid=2976 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:04.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:23:04.011000 audit[3173]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:23:04.011000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffec088feb0 a2=0 a3=7ffec088fe9c items=0 ppid=2976 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:04.011000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:04.012000 audit[3173]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:23:04.012000 
audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffec088feb0 a2=0 a3=7ffec088fe9c items=0 ppid=2976 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:04.012000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:04.054397 kubelet[2863]: E1213 00:23:04.054360 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:04.055000 kubelet[2863]: E1213 00:23:04.054945 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:06.043251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204833859.mount: Deactivated successfully. Dec 13 00:23:07.583168 containerd[1654]: time="2025-12-13T00:23:07.583089153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:07.584171 containerd[1654]: time="2025-12-13T00:23:07.584110195Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Dec 13 00:23:07.585325 containerd[1654]: time="2025-12-13T00:23:07.585290847Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:07.587191 containerd[1654]: time="2025-12-13T00:23:07.587141190Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:07.587792 containerd[1654]: time="2025-12-13T00:23:07.587750838Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.766874499s" Dec 13 00:23:07.587792 containerd[1654]: time="2025-12-13T00:23:07.587781436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 13 00:23:07.595069 containerd[1654]: time="2025-12-13T00:23:07.595014280Z" level=info msg="CreateContainer within sandbox \"0859a722d87a76bc05447de512520e7bc07f3cf595adf22688855f8fda21bd44\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 00:23:07.610890 containerd[1654]: time="2025-12-13T00:23:07.610831873Z" level=info msg="Container 87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:07.615593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093156323.mount: Deactivated successfully. 
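The containerd entries above record the quay.io/tigera/operator:v1.38.7 pull: 25052948 bytes read and a reported pull time of 3.766874499s. A small sketch estimating the effective pull throughput from those two figures (values copied from the log; the MiB/s conversion is plain arithmetic, not something containerd reports):

bytes_read = 25_052_948      # "bytes read=25052948" from the log above
pull_seconds = 3.766874499   # "in 3.766874499s" from the PullImage entry
print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")  # -> 6.3 MiB/s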
Dec 13 00:23:07.617703 containerd[1654]: time="2025-12-13T00:23:07.617666348Z" level=info msg="CreateContainer within sandbox \"0859a722d87a76bc05447de512520e7bc07f3cf595adf22688855f8fda21bd44\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa\"" Dec 13 00:23:07.618135 containerd[1654]: time="2025-12-13T00:23:07.618112508Z" level=info msg="StartContainer for \"87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa\"" Dec 13 00:23:07.618892 containerd[1654]: time="2025-12-13T00:23:07.618866797Z" level=info msg="connecting to shim 87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa" address="unix:///run/containerd/s/c29e002f58b7c0c805ddb583a4e2fbc0aacac108ffcb0d8ab14e8b00a4f3dfb0" protocol=ttrpc version=3 Dec 13 00:23:07.650448 systemd[1]: Started cri-containerd-87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa.scope - libcontainer container 87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa. Dec 13 00:23:07.665000 audit: BPF prog-id=150 op=LOAD Dec 13 00:23:07.665000 audit: BPF prog-id=151 op=LOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.665000 audit: BPF prog-id=151 op=UNLOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.665000 audit: BPF prog-id=152 op=LOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.665000 audit: BPF prog-id=153 op=LOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.665000 audit: BPF prog-id=153 op=UNLOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.665000 audit: BPF prog-id=152 op=UNLOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.665000 audit: BPF prog-id=154 op=LOAD Dec 13 00:23:07.665000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3012 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:07.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837646432643732626432303533653330383932626238623463636162 Dec 13 00:23:07.687308 containerd[1654]: time="2025-12-13T00:23:07.687227303Z" level=info msg="StartContainer for \"87dd2d72bd2053e30892bb8b4ccabb5b01dad0d275341d1d3d790389dbe380fa\" returns successfully" Dec 13 00:23:08.072772 kubelet[2863]: I1213 00:23:08.072703 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5r7ng" podStartSLOduration=6.072675507 podStartE2EDuration="6.072675507s" podCreationTimestamp="2025-12-13 00:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:23:04.075321876 +0000 UTC m=+6.164594118" watchObservedRunningTime="2025-12-13 00:23:08.072675507 +0000 UTC m=+10.161947719" Dec 13 00:23:08.073416 kubelet[2863]: I1213 00:23:08.072836 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-rfvxr" podStartSLOduration=1.303425275 podStartE2EDuration="5.072829226s" podCreationTimestamp="2025-12-13 00:23:03 +0000 UTC" firstStartedPulling="2025-12-13 00:23:03.82031418 +0000 UTC m=+5.909586392" lastFinishedPulling="2025-12-13 00:23:07.589718141 +0000 UTC m=+9.678990343" 
observedRunningTime="2025-12-13 00:23:08.0720025 +0000 UTC m=+10.161274712" watchObservedRunningTime="2025-12-13 00:23:08.072829226 +0000 UTC m=+10.162101438" Dec 13 00:23:09.182098 kubelet[2863]: E1213 00:23:09.182035 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:10.073393 kubelet[2863]: E1213 00:23:10.073310 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:13.600280 sudo[1865]: pam_unix(sudo:session): session closed for user root Dec 13 00:23:13.600000 audit[1865]: USER_END pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:23:13.606801 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 13 00:23:13.606963 kernel: audit: type=1106 audit(1765585393.600:522): pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:23:13.606995 sshd[1864]: Connection closed by 10.0.0.1 port 50284 Dec 13 00:23:13.613075 kernel: audit: type=1104 audit(1765585393.600:523): pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:23:13.600000 audit[1865]: CRED_DISP pid=1865 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:23:13.607694 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:13.607000 audit[1860]: USER_END pid=1860 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:13.614206 systemd-logind[1637]: Session 8 logged out. Waiting for processes to exit. Dec 13 00:23:13.621680 kernel: audit: type=1106 audit(1765585393.607:524): pid=1860 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:13.614928 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:50284.service: Deactivated successfully. Dec 13 00:23:13.619506 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 00:23:13.619849 systemd[1]: session-8.scope: Consumed 6.570s CPU time, 199.7M memory peak. Dec 13 00:23:13.624618 systemd-logind[1637]: Removed session 8. 
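The kauditd lines above carry timestamps of the form audit(1765585393.600:522), i.e. epoch seconds plus an event serial number. A standard-library one-liner to convert them to wall-clock time (the specific value is taken from the USER_END record above and matches its syslog timestamp):

from datetime import datetime, timezone
print(datetime.fromtimestamp(1765585393.600, tz=timezone.utc))
# -> 2025-12-13 00:23:13.600000+00:00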
Dec 13 00:23:13.607000 audit[1860]: CRED_DISP pid=1860 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:13.638457 kernel: audit: type=1104 audit(1765585393.607:525): pid=1860 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:13.638541 kernel: audit: type=1131 audit(1765585393.613:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.91:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:13.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.91:22-10.0.0.1:50284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:13.995000 audit[3272]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:14.001384 kernel: audit: type=1325 audit(1765585393.995:527): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:13.995000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd45881b70 a2=0 a3=7ffd45881b5c items=0 ppid=2976 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:14.010329 kernel: audit: type=1300 audit(1765585393.995:527): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd45881b70 a2=0 a3=7ffd45881b5c items=0 ppid=2976 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:13.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:14.017141 kernel: audit: type=1327 audit(1765585393.995:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:14.017211 kernel: audit: type=1325 audit(1765585394.010:528): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:14.010000 audit[3272]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:14.010000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd45881b70 a2=0 a3=0 items=0 ppid=2976 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:14.027457 kernel: audit: type=1300 audit(1765585394.010:528): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd45881b70 a2=0 a3=0 items=0 ppid=2976 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:14.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:15.074000 audit[3274]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:15.074000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf0666880 a2=0 a3=7ffcf066686c items=0 ppid=2976 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:15.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:15.080000 audit[3274]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:15.080000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf0666880 a2=0 a3=0 items=0 ppid=2976 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:15.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:16.663000 audit[3277]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:16.663000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe4963dde0 a2=0 a3=7ffe4963ddcc items=0 ppid=2976 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:16.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:16.667000 audit[3277]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:16.667000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4963dde0 a2=0 a3=0 items=0 ppid=2976 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:16.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:17.685000 audit[3279]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:17.685000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe76e9c8f0 a2=0 a3=7ffe76e9c8dc items=0 ppid=2976 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:17.685000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:17.692000 audit[3279]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:17.692000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe76e9c8f0 a2=0 a3=0 items=0 ppid=2976 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:17.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:18.409560 systemd[1]: Created slice kubepods-besteffort-pod05d96039_03bb_47ce_b66a_d0ab21c866d2.slice - libcontainer container kubepods-besteffort-pod05d96039_03bb_47ce_b66a_d0ab21c866d2.slice. Dec 13 00:23:18.441537 kubelet[2863]: I1213 00:23:18.441476 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d96039-03bb-47ce-b66a-d0ab21c866d2-tigera-ca-bundle\") pod \"calico-typha-5b6dcf946f-b54vp\" (UID: \"05d96039-03bb-47ce-b66a-d0ab21c866d2\") " pod="calico-system/calico-typha-5b6dcf946f-b54vp" Dec 13 00:23:18.441537 kubelet[2863]: I1213 00:23:18.441531 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66r6v\" (UniqueName: \"kubernetes.io/projected/05d96039-03bb-47ce-b66a-d0ab21c866d2-kube-api-access-66r6v\") pod \"calico-typha-5b6dcf946f-b54vp\" (UID: \"05d96039-03bb-47ce-b66a-d0ab21c866d2\") " pod="calico-system/calico-typha-5b6dcf946f-b54vp" Dec 13 00:23:18.442086 kubelet[2863]: I1213 00:23:18.441633 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05d96039-03bb-47ce-b66a-d0ab21c866d2-typha-certs\") pod \"calico-typha-5b6dcf946f-b54vp\" (UID: \"05d96039-03bb-47ce-b66a-d0ab21c866d2\") " pod="calico-system/calico-typha-5b6dcf946f-b54vp" Dec 13 00:23:18.547733 systemd[1]: Created slice kubepods-besteffort-pod471fe096_2f9e_4c72_be8c_42027ce17794.slice - libcontainer container kubepods-besteffort-pod471fe096_2f9e_4c72_be8c_42027ce17794.slice. 
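The "Created slice" entries above show how the kubelet's systemd cgroup driver names pod slices: a QoS-class prefix plus the pod UID with dashes mapped to underscores. A sketch reproducing the slice name seen in the entry above for the calico-node pod (pure string manipulation; the UID is the one that appears in the volume records that follow):

pod_uid = "471fe096-2f9e-4c72-be8c-42027ce17794"  # UID of calico-node-nkqp5
slice_name = f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"
print(slice_name)
# -> kubepods-besteffort-pod471fe096_2f9e_4c72_be8c_42027ce17794.slice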
Dec 13 00:23:18.643058 kubelet[2863]: I1213 00:23:18.642949 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-flexvol-driver-host\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643058 kubelet[2863]: I1213 00:23:18.643013 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-lib-modules\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643058 kubelet[2863]: I1213 00:23:18.643044 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-policysync\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643058 kubelet[2863]: I1213 00:23:18.643068 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-xtables-lock\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643487 kubelet[2863]: I1213 00:23:18.643202 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-cni-bin-dir\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643487 kubelet[2863]: I1213 00:23:18.643288 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-cni-log-dir\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643487 kubelet[2863]: I1213 00:23:18.643311 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-var-lib-calico\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643487 kubelet[2863]: I1213 00:23:18.643330 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-var-run-calico\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643487 kubelet[2863]: I1213 00:23:18.643368 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/471fe096-2f9e-4c72-be8c-42027ce17794-cni-net-dir\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643744 kubelet[2863]: I1213 00:23:18.643398 2863 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/471fe096-2f9e-4c72-be8c-42027ce17794-node-certs\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643744 kubelet[2863]: I1213 00:23:18.643424 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/471fe096-2f9e-4c72-be8c-42027ce17794-tigera-ca-bundle\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.643744 kubelet[2863]: I1213 00:23:18.643455 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cps7\" (UniqueName: \"kubernetes.io/projected/471fe096-2f9e-4c72-be8c-42027ce17794-kube-api-access-5cps7\") pod \"calico-node-nkqp5\" (UID: \"471fe096-2f9e-4c72-be8c-42027ce17794\") " pod="calico-system/calico-node-nkqp5" Dec 13 00:23:18.708000 audit[3283]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:18.710371 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 00:23:18.710493 kernel: audit: type=1325 audit(1765585398.708:535): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:18.708000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8baea8a0 a2=0 a3=7fff8baea88c items=0 ppid=2976 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.719750 kernel: audit: type=1300 audit(1765585398.708:535): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8baea8a0 a2=0 a3=7fff8baea88c items=0 ppid=2976 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.719844 kernel: audit: type=1327 audit(1765585398.708:535): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:18.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:18.723000 audit[3283]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:18.723000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8baea8a0 a2=0 a3=0 items=0 ppid=2976 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.735491 kernel: audit: type=1325 audit(1765585398.723:536): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:18.735608 kernel: audit: type=1300 audit(1765585398.723:536): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8baea8a0 a2=0 a3=0 items=0 ppid=2976 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.735632 kernel: audit: type=1327 audit(1765585398.723:536): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:18.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:18.745581 kubelet[2863]: E1213 00:23:18.745484 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.745581 kubelet[2863]: W1213 00:23:18.745556 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.745581 kubelet[2863]: E1213 00:23:18.745586 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.746135 kubelet[2863]: E1213 00:23:18.746079 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.746135 kubelet[2863]: W1213 00:23:18.746113 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.746135 kubelet[2863]: E1213 00:23:18.746137 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.750642 kubelet[2863]: E1213 00:23:18.750601 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.750642 kubelet[2863]: W1213 00:23:18.750628 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.750642 kubelet[2863]: E1213 00:23:18.750650 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.819456 kubelet[2863]: E1213 00:23:18.819393 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:18.822878 containerd[1654]: time="2025-12-13T00:23:18.822818768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6dcf946f-b54vp,Uid:05d96039-03bb-47ce-b66a-d0ab21c866d2,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:18.876788 kubelet[2863]: E1213 00:23:18.876748 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.877305 kubelet[2863]: W1213 00:23:18.877083 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.877305 kubelet[2863]: E1213 00:23:18.877114 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.889590 kubelet[2863]: E1213 00:23:18.889076 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:18.904319 containerd[1654]: time="2025-12-13T00:23:18.904228081Z" level=info msg="connecting to shim e44f15b91b8fbbe17d1592da8ecffe39d1cfe4ba26c1a9c3dc0e36caffaa2074" address="unix:///run/containerd/s/cb33134b19ba05584385b370a197a4f4b4dc733e424293197a247f8190f787e9" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:18.921280 kubelet[2863]: E1213 00:23:18.919744 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.921280 kubelet[2863]: W1213 00:23:18.919782 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.921280 kubelet[2863]: E1213 00:23:18.919805 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.921840 kubelet[2863]: E1213 00:23:18.921811 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.921840 kubelet[2863]: W1213 00:23:18.921830 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.921840 kubelet[2863]: E1213 00:23:18.921840 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.922111 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.923309 kubelet[2863]: W1213 00:23:18.922133 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.922165 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.922692 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.923309 kubelet[2863]: W1213 00:23:18.922703 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.922713 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.923010 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.923309 kubelet[2863]: W1213 00:23:18.923021 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.923033 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.923309 kubelet[2863]: E1213 00:23:18.923323 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.923686 kubelet[2863]: W1213 00:23:18.923337 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.923686 kubelet[2863]: E1213 00:23:18.923349 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.923686 kubelet[2863]: E1213 00:23:18.923632 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.923686 kubelet[2863]: W1213 00:23:18.923643 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.923686 kubelet[2863]: E1213 00:23:18.923654 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.924309 kubelet[2863]: E1213 00:23:18.923914 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.924309 kubelet[2863]: W1213 00:23:18.923931 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.924309 kubelet[2863]: E1213 00:23:18.923942 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.924507 kubelet[2863]: E1213 00:23:18.924328 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.924507 kubelet[2863]: W1213 00:23:18.924346 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.924507 kubelet[2863]: E1213 00:23:18.924359 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.925029 kubelet[2863]: E1213 00:23:18.924673 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.925029 kubelet[2863]: W1213 00:23:18.924684 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.925029 kubelet[2863]: E1213 00:23:18.924695 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.925464 kubelet[2863]: E1213 00:23:18.925111 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.925464 kubelet[2863]: W1213 00:23:18.925121 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.925464 kubelet[2863]: E1213 00:23:18.925133 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.925464 kubelet[2863]: E1213 00:23:18.925424 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.925464 kubelet[2863]: W1213 00:23:18.925434 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.925464 kubelet[2863]: E1213 00:23:18.925444 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.926035 kubelet[2863]: E1213 00:23:18.925708 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.926035 kubelet[2863]: W1213 00:23:18.925719 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.926035 kubelet[2863]: E1213 00:23:18.925730 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.926642 kubelet[2863]: E1213 00:23:18.926608 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.926642 kubelet[2863]: W1213 00:23:18.926634 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.926714 kubelet[2863]: E1213 00:23:18.926660 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.927093 kubelet[2863]: E1213 00:23:18.926908 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.927093 kubelet[2863]: W1213 00:23:18.926926 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.927093 kubelet[2863]: E1213 00:23:18.926935 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.927750 kubelet[2863]: E1213 00:23:18.927127 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.927750 kubelet[2863]: W1213 00:23:18.927136 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.927750 kubelet[2863]: E1213 00:23:18.927157 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.927750 kubelet[2863]: E1213 00:23:18.927738 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.927750 kubelet[2863]: W1213 00:23:18.927746 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.927750 kubelet[2863]: E1213 00:23:18.927756 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.928092 kubelet[2863]: E1213 00:23:18.927952 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.928092 kubelet[2863]: W1213 00:23:18.927960 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.928092 kubelet[2863]: E1213 00:23:18.927978 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.928426 kubelet[2863]: E1213 00:23:18.928281 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.928426 kubelet[2863]: W1213 00:23:18.928292 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.928426 kubelet[2863]: E1213 00:23:18.928304 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.928954 kubelet[2863]: E1213 00:23:18.928629 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.928954 kubelet[2863]: W1213 00:23:18.928638 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.928954 kubelet[2863]: E1213 00:23:18.928647 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.946318 kubelet[2863]: E1213 00:23:18.946272 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.946318 kubelet[2863]: W1213 00:23:18.946302 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.946318 kubelet[2863]: E1213 00:23:18.946324 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.946551 kubelet[2863]: I1213 00:23:18.946407 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/43eaf899-3f04-44ec-95d7-4d02448959a8-socket-dir\") pod \"csi-node-driver-tp27z\" (UID: \"43eaf899-3f04-44ec-95d7-4d02448959a8\") " pod="calico-system/csi-node-driver-tp27z" Dec 13 00:23:18.946986 kubelet[2863]: E1213 00:23:18.946818 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.946986 kubelet[2863]: W1213 00:23:18.946834 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.946986 kubelet[2863]: E1213 00:23:18.946843 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.946986 kubelet[2863]: I1213 00:23:18.946872 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpqw\" (UniqueName: \"kubernetes.io/projected/43eaf899-3f04-44ec-95d7-4d02448959a8-kube-api-access-ngpqw\") pod \"csi-node-driver-tp27z\" (UID: \"43eaf899-3f04-44ec-95d7-4d02448959a8\") " pod="calico-system/csi-node-driver-tp27z" Dec 13 00:23:18.948259 kubelet[2863]: E1213 00:23:18.947156 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.948259 kubelet[2863]: W1213 00:23:18.947170 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.948259 kubelet[2863]: E1213 00:23:18.947179 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.948259 kubelet[2863]: I1213 00:23:18.947201 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/43eaf899-3f04-44ec-95d7-4d02448959a8-registration-dir\") pod \"csi-node-driver-tp27z\" (UID: \"43eaf899-3f04-44ec-95d7-4d02448959a8\") " pod="calico-system/csi-node-driver-tp27z" Dec 13 00:23:18.948259 kubelet[2863]: E1213 00:23:18.947481 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.948259 kubelet[2863]: W1213 00:23:18.947490 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.948259 kubelet[2863]: E1213 00:23:18.947499 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.948259 kubelet[2863]: I1213 00:23:18.947525 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43eaf899-3f04-44ec-95d7-4d02448959a8-kubelet-dir\") pod \"csi-node-driver-tp27z\" (UID: \"43eaf899-3f04-44ec-95d7-4d02448959a8\") " pod="calico-system/csi-node-driver-tp27z" Dec 13 00:23:18.948259 kubelet[2863]: E1213 00:23:18.947791 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.948486 kubelet[2863]: W1213 00:23:18.947801 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.948486 kubelet[2863]: E1213 00:23:18.947810 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.948486 kubelet[2863]: I1213 00:23:18.948070 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/43eaf899-3f04-44ec-95d7-4d02448959a8-varrun\") pod \"csi-node-driver-tp27z\" (UID: \"43eaf899-3f04-44ec-95d7-4d02448959a8\") " pod="calico-system/csi-node-driver-tp27z" Dec 13 00:23:18.948486 kubelet[2863]: E1213 00:23:18.948468 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.948486 kubelet[2863]: W1213 00:23:18.948478 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.948486 kubelet[2863]: E1213 00:23:18.948487 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.948748 kubelet[2863]: E1213 00:23:18.948725 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.948748 kubelet[2863]: W1213 00:23:18.948740 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.948748 kubelet[2863]: E1213 00:23:18.948748 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.949021 kubelet[2863]: E1213 00:23:18.948996 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.949021 kubelet[2863]: W1213 00:23:18.949012 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.949021 kubelet[2863]: E1213 00:23:18.949022 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.949300 kubelet[2863]: E1213 00:23:18.949279 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.949300 kubelet[2863]: W1213 00:23:18.949292 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.949300 kubelet[2863]: E1213 00:23:18.949301 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.950578 kubelet[2863]: E1213 00:23:18.950544 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.950578 kubelet[2863]: W1213 00:23:18.950560 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.950578 kubelet[2863]: E1213 00:23:18.950569 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.951190 kubelet[2863]: E1213 00:23:18.951157 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.951229 kubelet[2863]: W1213 00:23:18.951176 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.951333 kubelet[2863]: E1213 00:23:18.951293 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.951735 kubelet[2863]: E1213 00:23:18.951707 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.951735 kubelet[2863]: W1213 00:23:18.951720 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.951735 kubelet[2863]: E1213 00:23:18.951729 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.952412 kubelet[2863]: E1213 00:23:18.952394 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.952412 kubelet[2863]: W1213 00:23:18.952407 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.952491 kubelet[2863]: E1213 00:23:18.952417 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:18.952850 kubelet[2863]: E1213 00:23:18.952830 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.952962 kubelet[2863]: W1213 00:23:18.952844 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.952998 kubelet[2863]: E1213 00:23:18.952983 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.953713 kubelet[2863]: E1213 00:23:18.953693 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:18.953713 kubelet[2863]: W1213 00:23:18.953706 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:18.953795 kubelet[2863]: E1213 00:23:18.953716 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:18.954530 systemd[1]: Started cri-containerd-e44f15b91b8fbbe17d1592da8ecffe39d1cfe4ba26c1a9c3dc0e36caffaa2074.scope - libcontainer container e44f15b91b8fbbe17d1592da8ecffe39d1cfe4ba26c1a9c3dc0e36caffaa2074. Dec 13 00:23:18.972000 audit: BPF prog-id=155 op=LOAD Dec 13 00:23:18.974327 kernel: audit: type=1334 audit(1765585398.972:537): prog-id=155 op=LOAD Dec 13 00:23:18.973000 audit: BPF prog-id=156 op=LOAD Dec 13 00:23:18.973000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.981894 kernel: audit: type=1334 audit(1765585398.973:538): prog-id=156 op=LOAD Dec 13 00:23:18.981983 kernel: audit: type=1300 audit(1765585398.973:538): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.973000 audit: BPF prog-id=156 op=UNLOAD Dec 13 00:23:18.973000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.973000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.974000 audit: BPF prog-id=157 op=LOAD Dec 13 00:23:18.974000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.974000 audit: BPF prog-id=158 op=LOAD Dec 13 00:23:18.974000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.974000 audit: BPF prog-id=158 op=UNLOAD Dec 13 00:23:18.974000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.988255 kernel: audit: type=1327 audit(1765585398.973:538): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.974000 audit: BPF prog-id=157 op=UNLOAD Dec 13 00:23:18.974000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:18.974000 audit: BPF prog-id=159 op=LOAD Dec 13 00:23:18.974000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3304 pid=3321 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:18.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534346631356239316238666262653137643135393264613865636666 Dec 13 00:23:19.018033 containerd[1654]: time="2025-12-13T00:23:19.017981700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6dcf946f-b54vp,Uid:05d96039-03bb-47ce-b66a-d0ab21c866d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e44f15b91b8fbbe17d1592da8ecffe39d1cfe4ba26c1a9c3dc0e36caffaa2074\"" Dec 13 00:23:19.018665 kubelet[2863]: E1213 00:23:19.018631 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:19.022610 containerd[1654]: time="2025-12-13T00:23:19.022565829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 13 00:23:19.049670 kubelet[2863]: E1213 00:23:19.049608 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.049670 kubelet[2863]: W1213 00:23:19.049632 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.049670 kubelet[2863]: E1213 00:23:19.049653 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.049951 kubelet[2863]: E1213 00:23:19.049918 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.049951 kubelet[2863]: W1213 00:23:19.049934 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.049951 kubelet[2863]: E1213 00:23:19.049948 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.050412 kubelet[2863]: E1213 00:23:19.050379 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.050412 kubelet[2863]: W1213 00:23:19.050407 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.050490 kubelet[2863]: E1213 00:23:19.050428 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:19.050640 kubelet[2863]: E1213 00:23:19.050621 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.050640 kubelet[2863]: W1213 00:23:19.050633 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.050640 kubelet[2863]: E1213 00:23:19.050641 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.050888 kubelet[2863]: E1213 00:23:19.050868 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.050888 kubelet[2863]: W1213 00:23:19.050881 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.050946 kubelet[2863]: E1213 00:23:19.050891 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.051162 kubelet[2863]: E1213 00:23:19.051131 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.051162 kubelet[2863]: W1213 00:23:19.051160 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.051252 kubelet[2863]: E1213 00:23:19.051172 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.051450 kubelet[2863]: E1213 00:23:19.051430 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.051450 kubelet[2863]: W1213 00:23:19.051443 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.051450 kubelet[2863]: E1213 00:23:19.051451 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.051672 kubelet[2863]: E1213 00:23:19.051652 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.051672 kubelet[2863]: W1213 00:23:19.051664 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.051672 kubelet[2863]: E1213 00:23:19.051671 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:19.051891 kubelet[2863]: E1213 00:23:19.051874 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.051891 kubelet[2863]: W1213 00:23:19.051885 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.051891 kubelet[2863]: E1213 00:23:19.051893 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.052134 kubelet[2863]: E1213 00:23:19.052116 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.052134 kubelet[2863]: W1213 00:23:19.052127 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.052228 kubelet[2863]: E1213 00:23:19.052146 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.052388 kubelet[2863]: E1213 00:23:19.052369 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.052388 kubelet[2863]: W1213 00:23:19.052380 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.052388 kubelet[2863]: E1213 00:23:19.052389 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.052597 kubelet[2863]: E1213 00:23:19.052579 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.052597 kubelet[2863]: W1213 00:23:19.052590 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.052597 kubelet[2863]: E1213 00:23:19.052598 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.052824 kubelet[2863]: E1213 00:23:19.052806 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.052824 kubelet[2863]: W1213 00:23:19.052817 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.052824 kubelet[2863]: E1213 00:23:19.052825 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:19.053030 kubelet[2863]: E1213 00:23:19.053012 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.053030 kubelet[2863]: W1213 00:23:19.053023 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.053030 kubelet[2863]: E1213 00:23:19.053031 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.053296 kubelet[2863]: E1213 00:23:19.053276 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.053296 kubelet[2863]: W1213 00:23:19.053287 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.053296 kubelet[2863]: E1213 00:23:19.053296 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.053556 kubelet[2863]: E1213 00:23:19.053538 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.053556 kubelet[2863]: W1213 00:23:19.053553 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.053644 kubelet[2863]: E1213 00:23:19.053567 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.053832 kubelet[2863]: E1213 00:23:19.053811 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.053832 kubelet[2863]: W1213 00:23:19.053824 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.053832 kubelet[2863]: E1213 00:23:19.053835 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.054115 kubelet[2863]: E1213 00:23:19.054088 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.054115 kubelet[2863]: W1213 00:23:19.054106 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.054230 kubelet[2863]: E1213 00:23:19.054120 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:19.054391 kubelet[2863]: E1213 00:23:19.054367 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.054391 kubelet[2863]: W1213 00:23:19.054381 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.054391 kubelet[2863]: E1213 00:23:19.054392 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.054643 kubelet[2863]: E1213 00:23:19.054616 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.054643 kubelet[2863]: W1213 00:23:19.054631 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.054643 kubelet[2863]: E1213 00:23:19.054642 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.054937 kubelet[2863]: E1213 00:23:19.054903 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.054937 kubelet[2863]: W1213 00:23:19.054922 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.054937 kubelet[2863]: E1213 00:23:19.054934 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.055171 kubelet[2863]: E1213 00:23:19.055152 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.055171 kubelet[2863]: W1213 00:23:19.055164 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.055300 kubelet[2863]: E1213 00:23:19.055175 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.055428 kubelet[2863]: E1213 00:23:19.055412 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.055428 kubelet[2863]: W1213 00:23:19.055423 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.055488 kubelet[2863]: E1213 00:23:19.055434 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:19.055688 kubelet[2863]: E1213 00:23:19.055671 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.055688 kubelet[2863]: W1213 00:23:19.055682 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.055753 kubelet[2863]: E1213 00:23:19.055692 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.056312 kubelet[2863]: E1213 00:23:19.056288 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.056312 kubelet[2863]: W1213 00:23:19.056302 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.056312 kubelet[2863]: E1213 00:23:19.056313 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.067300 kubelet[2863]: E1213 00:23:19.067251 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:19.067300 kubelet[2863]: W1213 00:23:19.067280 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:19.067300 kubelet[2863]: E1213 00:23:19.067304 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:19.158272 kubelet[2863]: E1213 00:23:19.158182 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:19.158832 containerd[1654]: time="2025-12-13T00:23:19.158787591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nkqp5,Uid:471fe096-2f9e-4c72-be8c-42027ce17794,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:19.198517 containerd[1654]: time="2025-12-13T00:23:19.198445575Z" level=info msg="connecting to shim 682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495" address="unix:///run/containerd/s/fcdb30364c17bae0bc50d94f75cb9c9cdc68e4a9f09364dd4d0a525b7ef960be" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:19.233552 systemd[1]: Started cri-containerd-682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495.scope - libcontainer container 682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495. 
Dec 13 00:23:19.250000 audit: BPF prog-id=160 op=LOAD Dec 13 00:23:19.250000 audit: BPF prog-id=161 op=LOAD Dec 13 00:23:19.250000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.250000 audit: BPF prog-id=161 op=UNLOAD Dec 13 00:23:19.250000 audit[3428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.250000 audit: BPF prog-id=162 op=LOAD Dec 13 00:23:19.250000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.251000 audit: BPF prog-id=163 op=LOAD Dec 13 00:23:19.251000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.251000 audit: BPF prog-id=163 op=UNLOAD Dec 13 00:23:19.251000 audit[3428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.251000 audit: BPF prog-id=162 op=UNLOAD Dec 13 00:23:19.251000 audit[3428]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.251000 audit: BPF prog-id=164 op=LOAD Dec 13 00:23:19.251000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3415 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:19.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638326662393231376639336662313634613966653565616266353532 Dec 13 00:23:19.276226 containerd[1654]: time="2025-12-13T00:23:19.276152698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nkqp5,Uid:471fe096-2f9e-4c72-be8c-42027ce17794,Namespace:calico-system,Attempt:0,} returns sandbox id \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\"" Dec 13 00:23:19.277206 kubelet[2863]: E1213 00:23:19.277175 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:21.017466 kubelet[2863]: E1213 00:23:21.017388 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:21.822897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058534686.mount: Deactivated successfully. 
Dec 13 00:23:23.013866 containerd[1654]: time="2025-12-13T00:23:23.013773099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:23.017627 kubelet[2863]: E1213 00:23:23.017578 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:23.043765 containerd[1654]: time="2025-12-13T00:23:23.043548650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736634" Dec 13 00:23:23.152751 containerd[1654]: time="2025-12-13T00:23:23.152681584Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:23.198741 containerd[1654]: time="2025-12-13T00:23:23.198644283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:23.199424 containerd[1654]: time="2025-12-13T00:23:23.199373474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.17677365s" Dec 13 00:23:23.199424 containerd[1654]: time="2025-12-13T00:23:23.199420943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 13 00:23:23.200756 containerd[1654]: time="2025-12-13T00:23:23.200630106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 13 00:23:23.277693 containerd[1654]: time="2025-12-13T00:23:23.276780401Z" level=info msg="CreateContainer within sandbox \"e44f15b91b8fbbe17d1592da8ecffe39d1cfe4ba26c1a9c3dc0e36caffaa2074\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 00:23:23.652913 containerd[1654]: time="2025-12-13T00:23:23.652711734Z" level=info msg="Container 629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:24.112580 containerd[1654]: time="2025-12-13T00:23:24.112532519Z" level=info msg="CreateContainer within sandbox \"e44f15b91b8fbbe17d1592da8ecffe39d1cfe4ba26c1a9c3dc0e36caffaa2074\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415\"" Dec 13 00:23:24.112997 containerd[1654]: time="2025-12-13T00:23:24.112976113Z" level=info msg="StartContainer for \"629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415\"" Dec 13 00:23:24.114453 containerd[1654]: time="2025-12-13T00:23:24.114391323Z" level=info msg="connecting to shim 629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415" address="unix:///run/containerd/s/cb33134b19ba05584385b370a197a4f4b4dc733e424293197a247f8190f787e9" protocol=ttrpc version=3 Dec 13 00:23:24.146532 systemd[1]: Started 
cri-containerd-629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415.scope - libcontainer container 629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415. Dec 13 00:23:24.164000 audit: BPF prog-id=165 op=LOAD Dec 13 00:23:24.165560 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 13 00:23:24.165650 kernel: audit: type=1334 audit(1765585404.164:553): prog-id=165 op=LOAD Dec 13 00:23:24.169153 kernel: audit: type=1334 audit(1765585404.164:554): prog-id=166 op=LOAD Dec 13 00:23:24.164000 audit: BPF prog-id=166 op=LOAD Dec 13 00:23:24.164000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.180596 kernel: audit: type=1300 audit(1765585404.164:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.180659 kernel: audit: type=1327 audit(1765585404.164:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.164000 audit: BPF prog-id=166 op=UNLOAD Dec 13 00:23:24.164000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.188099 kernel: audit: type=1334 audit(1765585404.164:555): prog-id=166 op=UNLOAD Dec 13 00:23:24.188144 kernel: audit: type=1300 audit(1765585404.164:555): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.193457 kernel: audit: type=1327 audit(1765585404.164:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.193606 kernel: audit: type=1334 audit(1765585404.164:556): prog-id=167 op=LOAD Dec 13 00:23:24.164000 audit: BPF prog-id=167 op=LOAD Dec 13 00:23:24.200159 kernel: audit: type=1300 audit(1765585404.164:556): 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.164000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.165000 audit: BPF prog-id=168 op=LOAD Dec 13 00:23:24.165000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.165000 audit: BPF prog-id=168 op=UNLOAD Dec 13 00:23:24.165000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.165000 audit: BPF prog-id=167 op=UNLOAD Dec 13 00:23:24.165000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.165000 audit: BPF prog-id=169 op=LOAD Dec 13 00:23:24.165000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3304 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:24.206701 kernel: audit: type=1327 audit(1765585404.164:556): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396232646662373765393836633966356464366237653234353533 Dec 13 00:23:24.221031 containerd[1654]: time="2025-12-13T00:23:24.220977992Z" level=info msg="StartContainer for \"629b2dfb77e986c9f5dd6b7e24553054164d697c951c10175ac9cb864cd34415\" returns successfully" Dec 13 00:23:25.018697 kubelet[2863]: E1213 00:23:25.018614 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:25.123954 kubelet[2863]: E1213 00:23:25.123899 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:25.180134 kubelet[2863]: E1213 00:23:25.180092 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.180134 kubelet[2863]: W1213 00:23:25.180120 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.180134 kubelet[2863]: E1213 00:23:25.180143 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.180427 kubelet[2863]: E1213 00:23:25.180410 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.180427 kubelet[2863]: W1213 00:23:25.180422 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.180490 kubelet[2863]: E1213 00:23:25.180432 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.180702 kubelet[2863]: E1213 00:23:25.180686 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.180702 kubelet[2863]: W1213 00:23:25.180699 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.180777 kubelet[2863]: E1213 00:23:25.180710 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.180979 kubelet[2863]: E1213 00:23:25.180959 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.180979 kubelet[2863]: W1213 00:23:25.180973 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.180979 kubelet[2863]: E1213 00:23:25.180984 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.181289 kubelet[2863]: E1213 00:23:25.181265 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.181289 kubelet[2863]: W1213 00:23:25.181278 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.181289 kubelet[2863]: E1213 00:23:25.181290 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.181525 kubelet[2863]: E1213 00:23:25.181501 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.181525 kubelet[2863]: W1213 00:23:25.181513 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.181525 kubelet[2863]: E1213 00:23:25.181524 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.181754 kubelet[2863]: E1213 00:23:25.181730 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.181754 kubelet[2863]: W1213 00:23:25.181741 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.181754 kubelet[2863]: E1213 00:23:25.181752 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.181978 kubelet[2863]: E1213 00:23:25.181954 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.181978 kubelet[2863]: W1213 00:23:25.181966 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.181978 kubelet[2863]: E1213 00:23:25.181979 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.182250 kubelet[2863]: E1213 00:23:25.182208 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.182250 kubelet[2863]: W1213 00:23:25.182222 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.182343 kubelet[2863]: E1213 00:23:25.182256 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.182493 kubelet[2863]: E1213 00:23:25.182468 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.182493 kubelet[2863]: W1213 00:23:25.182479 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.182493 kubelet[2863]: E1213 00:23:25.182491 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.182714 kubelet[2863]: E1213 00:23:25.182690 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.182714 kubelet[2863]: W1213 00:23:25.182701 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.182714 kubelet[2863]: E1213 00:23:25.182712 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.182948 kubelet[2863]: E1213 00:23:25.182923 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.182948 kubelet[2863]: W1213 00:23:25.182935 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.182948 kubelet[2863]: E1213 00:23:25.182946 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.183193 kubelet[2863]: E1213 00:23:25.183167 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.183193 kubelet[2863]: W1213 00:23:25.183179 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.183193 kubelet[2863]: E1213 00:23:25.183190 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.183441 kubelet[2863]: E1213 00:23:25.183419 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.183441 kubelet[2863]: W1213 00:23:25.183431 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.183441 kubelet[2863]: E1213 00:23:25.183442 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.183667 kubelet[2863]: E1213 00:23:25.183646 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.183667 kubelet[2863]: W1213 00:23:25.183657 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.183667 kubelet[2863]: E1213 00:23:25.183667 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.196068 kubelet[2863]: E1213 00:23:25.196033 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.196068 kubelet[2863]: W1213 00:23:25.196059 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.196219 kubelet[2863]: E1213 00:23:25.196082 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.196547 kubelet[2863]: E1213 00:23:25.196521 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.196547 kubelet[2863]: W1213 00:23:25.196535 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.196547 kubelet[2863]: E1213 00:23:25.196546 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.196837 kubelet[2863]: E1213 00:23:25.196813 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.196837 kubelet[2863]: W1213 00:23:25.196826 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.196837 kubelet[2863]: E1213 00:23:25.196835 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.197155 kubelet[2863]: E1213 00:23:25.197118 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.197155 kubelet[2863]: W1213 00:23:25.197138 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.197155 kubelet[2863]: E1213 00:23:25.197153 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.197518 kubelet[2863]: E1213 00:23:25.197484 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.197518 kubelet[2863]: W1213 00:23:25.197503 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.197610 kubelet[2863]: E1213 00:23:25.197540 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.197838 kubelet[2863]: E1213 00:23:25.197812 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.197838 kubelet[2863]: W1213 00:23:25.197827 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.197838 kubelet[2863]: E1213 00:23:25.197839 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.198122 kubelet[2863]: E1213 00:23:25.198094 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.198122 kubelet[2863]: W1213 00:23:25.198108 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.198122 kubelet[2863]: E1213 00:23:25.198119 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.198367 kubelet[2863]: E1213 00:23:25.198347 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.198367 kubelet[2863]: W1213 00:23:25.198359 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.198367 kubelet[2863]: E1213 00:23:25.198369 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.198602 kubelet[2863]: E1213 00:23:25.198581 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.198602 kubelet[2863]: W1213 00:23:25.198594 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.198602 kubelet[2863]: E1213 00:23:25.198605 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.198937 kubelet[2863]: E1213 00:23:25.198910 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.198937 kubelet[2863]: W1213 00:23:25.198927 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.199061 kubelet[2863]: E1213 00:23:25.198940 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.199226 kubelet[2863]: E1213 00:23:25.199201 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.199226 kubelet[2863]: W1213 00:23:25.199216 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.199226 kubelet[2863]: E1213 00:23:25.199228 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.199521 kubelet[2863]: E1213 00:23:25.199498 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.199521 kubelet[2863]: W1213 00:23:25.199510 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.199521 kubelet[2863]: E1213 00:23:25.199520 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.199845 kubelet[2863]: E1213 00:23:25.199820 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.199909 kubelet[2863]: W1213 00:23:25.199835 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.199909 kubelet[2863]: E1213 00:23:25.199876 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.200124 kubelet[2863]: E1213 00:23:25.200103 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.200124 kubelet[2863]: W1213 00:23:25.200115 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.200219 kubelet[2863]: E1213 00:23:25.200127 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.200368 kubelet[2863]: E1213 00:23:25.200346 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.200368 kubelet[2863]: W1213 00:23:25.200360 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.200470 kubelet[2863]: E1213 00:23:25.200371 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.200629 kubelet[2863]: E1213 00:23:25.200610 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.200629 kubelet[2863]: W1213 00:23:25.200621 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.200705 kubelet[2863]: E1213 00:23:25.200632 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.200945 kubelet[2863]: E1213 00:23:25.200923 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.200945 kubelet[2863]: W1213 00:23:25.200937 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.201045 kubelet[2863]: E1213 00:23:25.200948 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:25.201191 kubelet[2863]: E1213 00:23:25.201171 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:25.201191 kubelet[2863]: W1213 00:23:25.201185 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:25.201271 kubelet[2863]: E1213 00:23:25.201195 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:25.388312 kubelet[2863]: I1213 00:23:25.388117 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b6dcf946f-b54vp" podStartSLOduration=3.209875727 podStartE2EDuration="7.388097139s" podCreationTimestamp="2025-12-13 00:23:18 +0000 UTC" firstStartedPulling="2025-12-13 00:23:19.022293016 +0000 UTC m=+21.111565228" lastFinishedPulling="2025-12-13 00:23:23.200514428 +0000 UTC m=+25.289786640" observedRunningTime="2025-12-13 00:23:25.387470552 +0000 UTC m=+27.476742784" watchObservedRunningTime="2025-12-13 00:23:25.388097139 +0000 UTC m=+27.477369371" Dec 13 00:23:26.089089 containerd[1654]: time="2025-12-13T00:23:26.088944753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:26.108763 containerd[1654]: time="2025-12-13T00:23:26.108668363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:26.113377 kubelet[2863]: I1213 00:23:26.113337 2863 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:23:26.113841 kubelet[2863]: E1213 00:23:26.113784 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:26.115737 containerd[1654]: time="2025-12-13T00:23:26.115662769Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:26.119854 containerd[1654]: time="2025-12-13T00:23:26.119750703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:26.120599 containerd[1654]: time="2025-12-13T00:23:26.120549525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.91987731s" Dec 13 00:23:26.120661 containerd[1654]: time="2025-12-13T00:23:26.120598988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 13 00:23:26.126653 containerd[1654]: time="2025-12-13T00:23:26.126607010Z" level=info msg="CreateContainer within sandbox \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 00:23:26.141441 containerd[1654]: time="2025-12-13T00:23:26.141375757Z" level=info msg="Container 15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:26.152483 containerd[1654]: time="2025-12-13T00:23:26.152401490Z" level=info msg="CreateContainer within sandbox \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305\"" Dec 13 00:23:26.153162 containerd[1654]: time="2025-12-13T00:23:26.153117125Z" level=info msg="StartContainer for \"15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305\"" Dec 13 00:23:26.154892 containerd[1654]: time="2025-12-13T00:23:26.154843250Z" level=info msg="connecting to shim 15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305" address="unix:///run/containerd/s/fcdb30364c17bae0bc50d94f75cb9c9cdc68e4a9f09364dd4d0a525b7ef960be" protocol=ttrpc version=3 Dec 13 00:23:26.181821 systemd[1]: Started cri-containerd-15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305.scope - libcontainer container 15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305. Dec 13 00:23:26.190156 kubelet[2863]: E1213 00:23:26.190095 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.190156 kubelet[2863]: W1213 00:23:26.190118 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.190156 kubelet[2863]: E1213 00:23:26.190143 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.190442 kubelet[2863]: E1213 00:23:26.190425 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.190442 kubelet[2863]: W1213 00:23:26.190437 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.190496 kubelet[2863]: E1213 00:23:26.190446 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.190677 kubelet[2863]: E1213 00:23:26.190655 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.190677 kubelet[2863]: W1213 00:23:26.190665 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.190677 kubelet[2863]: E1213 00:23:26.190674 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.190877 kubelet[2863]: E1213 00:23:26.190862 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.190877 kubelet[2863]: W1213 00:23:26.190871 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.190929 kubelet[2863]: E1213 00:23:26.190879 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.191110 kubelet[2863]: E1213 00:23:26.191094 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.191110 kubelet[2863]: W1213 00:23:26.191106 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.191222 kubelet[2863]: E1213 00:23:26.191116 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.191345 kubelet[2863]: E1213 00:23:26.191331 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.191345 kubelet[2863]: W1213 00:23:26.191341 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.191397 kubelet[2863]: E1213 00:23:26.191349 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.191555 kubelet[2863]: E1213 00:23:26.191540 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.191555 kubelet[2863]: W1213 00:23:26.191551 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.191609 kubelet[2863]: E1213 00:23:26.191559 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.191778 kubelet[2863]: E1213 00:23:26.191761 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.191778 kubelet[2863]: W1213 00:23:26.191772 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.191830 kubelet[2863]: E1213 00:23:26.191780 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.192007 kubelet[2863]: E1213 00:23:26.191991 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.192007 kubelet[2863]: W1213 00:23:26.192002 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.192067 kubelet[2863]: E1213 00:23:26.192011 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.192219 kubelet[2863]: E1213 00:23:26.192200 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.192219 kubelet[2863]: W1213 00:23:26.192215 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.192336 kubelet[2863]: E1213 00:23:26.192227 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.192513 kubelet[2863]: E1213 00:23:26.192494 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.192513 kubelet[2863]: W1213 00:23:26.192509 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.192590 kubelet[2863]: E1213 00:23:26.192521 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.192758 kubelet[2863]: E1213 00:23:26.192743 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.192758 kubelet[2863]: W1213 00:23:26.192753 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.192821 kubelet[2863]: E1213 00:23:26.192762 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.193005 kubelet[2863]: E1213 00:23:26.192988 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.193005 kubelet[2863]: W1213 00:23:26.193000 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.193071 kubelet[2863]: E1213 00:23:26.193009 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.193225 kubelet[2863]: E1213 00:23:26.193205 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.193225 kubelet[2863]: W1213 00:23:26.193215 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.193225 kubelet[2863]: E1213 00:23:26.193223 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.193435 kubelet[2863]: E1213 00:23:26.193420 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.193435 kubelet[2863]: W1213 00:23:26.193430 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.193501 kubelet[2863]: E1213 00:23:26.193438 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.203032 kubelet[2863]: E1213 00:23:26.202985 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.203032 kubelet[2863]: W1213 00:23:26.203010 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.203032 kubelet[2863]: E1213 00:23:26.203030 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.203528 kubelet[2863]: E1213 00:23:26.203377 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.203528 kubelet[2863]: W1213 00:23:26.203402 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.203528 kubelet[2863]: E1213 00:23:26.203421 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.203869 kubelet[2863]: E1213 00:23:26.203837 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.203869 kubelet[2863]: W1213 00:23:26.203863 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.203929 kubelet[2863]: E1213 00:23:26.203881 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.204296 kubelet[2863]: E1213 00:23:26.204271 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.204296 kubelet[2863]: W1213 00:23:26.204288 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.204296 kubelet[2863]: E1213 00:23:26.204298 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.204639 kubelet[2863]: E1213 00:23:26.204615 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.204639 kubelet[2863]: W1213 00:23:26.204629 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.204639 kubelet[2863]: E1213 00:23:26.204639 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.204844 kubelet[2863]: E1213 00:23:26.204820 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.204844 kubelet[2863]: W1213 00:23:26.204835 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.204844 kubelet[2863]: E1213 00:23:26.204842 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.205132 kubelet[2863]: E1213 00:23:26.205104 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.205132 kubelet[2863]: W1213 00:23:26.205121 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.205132 kubelet[2863]: E1213 00:23:26.205132 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.205386 kubelet[2863]: E1213 00:23:26.205362 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.205386 kubelet[2863]: W1213 00:23:26.205378 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.205386 kubelet[2863]: E1213 00:23:26.205386 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.205674 kubelet[2863]: E1213 00:23:26.205649 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.205674 kubelet[2863]: W1213 00:23:26.205663 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.205674 kubelet[2863]: E1213 00:23:26.205671 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.206081 kubelet[2863]: E1213 00:23:26.206046 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.206081 kubelet[2863]: W1213 00:23:26.206075 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.206151 kubelet[2863]: E1213 00:23:26.206096 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.206394 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.207258 kubelet[2863]: W1213 00:23:26.206406 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.206416 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.206651 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.207258 kubelet[2863]: W1213 00:23:26.206659 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.206668 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.206907 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.207258 kubelet[2863]: W1213 00:23:26.206916 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.206925 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.207258 kubelet[2863]: E1213 00:23:26.207167 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.207506 kubelet[2863]: W1213 00:23:26.207175 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.207506 kubelet[2863]: E1213 00:23:26.207185 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.207506 kubelet[2863]: E1213 00:23:26.207436 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.207506 kubelet[2863]: W1213 00:23:26.207445 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.207506 kubelet[2863]: E1213 00:23:26.207454 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.207705 kubelet[2863]: E1213 00:23:26.207685 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.207705 kubelet[2863]: W1213 00:23:26.207700 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.207772 kubelet[2863]: E1213 00:23:26.207709 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.208213 kubelet[2863]: E1213 00:23:26.208173 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.208213 kubelet[2863]: W1213 00:23:26.208201 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.208301 kubelet[2863]: E1213 00:23:26.208225 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:23:26.208491 kubelet[2863]: E1213 00:23:26.208474 2863 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:23:26.208491 kubelet[2863]: W1213 00:23:26.208486 2863 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:23:26.208569 kubelet[2863]: E1213 00:23:26.208495 2863 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:23:26.232000 audit: BPF prog-id=170 op=LOAD Dec 13 00:23:26.232000 audit[3542]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3415 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:26.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135666365363262313165366436366130626661373364333363306433 Dec 13 00:23:26.233000 audit: BPF prog-id=171 op=LOAD Dec 13 00:23:26.233000 audit[3542]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3415 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:26.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135666365363262313165366436366130626661373364333363306433 Dec 13 00:23:26.233000 audit: BPF prog-id=171 op=UNLOAD Dec 13 00:23:26.233000 audit[3542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:26.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135666365363262313165366436366130626661373364333363306433 Dec 13 00:23:26.233000 audit: BPF prog-id=170 op=UNLOAD Dec 13 00:23:26.233000 audit[3542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:26.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135666365363262313165366436366130626661373364333363306433 Dec 13 00:23:26.233000 audit: BPF prog-id=172 op=LOAD Dec 13 00:23:26.233000 audit[3542]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3415 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:26.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135666365363262313165366436366130626661373364333363306433 Dec 13 00:23:26.267137 systemd[1]: 
cri-containerd-15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305.scope: Deactivated successfully. Dec 13 00:23:26.271000 audit: BPF prog-id=172 op=UNLOAD Dec 13 00:23:26.276653 containerd[1654]: time="2025-12-13T00:23:26.276539430Z" level=info msg="received container exit event container_id:\"15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305\" id:\"15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305\" pid:3555 exited_at:{seconds:1765585406 nanos:268984521}" Dec 13 00:23:26.278784 containerd[1654]: time="2025-12-13T00:23:26.278730128Z" level=info msg="StartContainer for \"15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305\" returns successfully" Dec 13 00:23:26.308388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15fce62b11e6d66a0bfa73d33c0d3f9241ac9b09cd27295a601c1dcd00273305-rootfs.mount: Deactivated successfully. Dec 13 00:23:27.017489 kubelet[2863]: E1213 00:23:27.017408 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:27.118761 kubelet[2863]: E1213 00:23:27.118704 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:27.119601 containerd[1654]: time="2025-12-13T00:23:27.119567469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 13 00:23:29.017480 kubelet[2863]: E1213 00:23:29.017400 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:30.120023 kubelet[2863]: I1213 00:23:30.119967 2863 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:23:30.120618 kubelet[2863]: E1213 00:23:30.120379 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:30.124715 kubelet[2863]: E1213 00:23:30.124680 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:30.158000 audit[3636]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:30.160575 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 00:23:30.160721 kernel: audit: type=1325 audit(1765585410.158:567): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:30.158000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb877f6e0 a2=0 a3=7fffb877f6cc items=0 ppid=2976 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.168426 containerd[1654]: time="2025-12-13T00:23:30.168371232Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:30.170939 containerd[1654]: time="2025-12-13T00:23:30.170901026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 13 00:23:30.172382 containerd[1654]: time="2025-12-13T00:23:30.172353054Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:30.172585 kernel: audit: type=1300 audit(1765585410.158:567): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb877f6e0 a2=0 a3=7fffb877f6cc items=0 ppid=2976 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:30.175793 kernel: audit: type=1327 audit(1765585410.158:567): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:30.175876 kernel: audit: type=1325 audit(1765585410.167:568): table=nat:116 family=2 entries=19 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:30.167000 audit[3636]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:30.177211 containerd[1654]: time="2025-12-13T00:23:30.177174926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:30.177906 containerd[1654]: time="2025-12-13T00:23:30.177867928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.058257198s" Dec 13 00:23:30.177967 containerd[1654]: time="2025-12-13T00:23:30.177911841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 13 00:23:30.179157 kernel: audit: type=1300 audit(1765585410.167:568): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffb877f6e0 a2=0 a3=7fffb877f6cc items=0 ppid=2976 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.167000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffb877f6e0 a2=0 a3=7fffb877f6cc items=0 ppid=2976 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:30.188801 kernel: audit: type=1327 
audit(1765585410.167:568): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:30.196786 containerd[1654]: time="2025-12-13T00:23:30.196720126Z" level=info msg="CreateContainer within sandbox \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 00:23:30.213617 containerd[1654]: time="2025-12-13T00:23:30.213563188Z" level=info msg="Container 77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:30.224758 containerd[1654]: time="2025-12-13T00:23:30.224684649Z" level=info msg="CreateContainer within sandbox \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95\"" Dec 13 00:23:30.225397 containerd[1654]: time="2025-12-13T00:23:30.225346823Z" level=info msg="StartContainer for \"77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95\"" Dec 13 00:23:30.226976 containerd[1654]: time="2025-12-13T00:23:30.226939917Z" level=info msg="connecting to shim 77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95" address="unix:///run/containerd/s/fcdb30364c17bae0bc50d94f75cb9c9cdc68e4a9f09364dd4d0a525b7ef960be" protocol=ttrpc version=3 Dec 13 00:23:30.250421 systemd[1]: Started cri-containerd-77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95.scope - libcontainer container 77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95. Dec 13 00:23:30.335000 audit: BPF prog-id=173 op=LOAD Dec 13 00:23:30.335000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3415 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.343029 kernel: audit: type=1334 audit(1765585410.335:569): prog-id=173 op=LOAD Dec 13 00:23:30.343150 kernel: audit: type=1300 audit(1765585410.335:569): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3415 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.343204 kernel: audit: type=1327 audit(1765585410.335:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653030396165623666633364313164373437626361646339363161 Dec 13 00:23:30.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653030396165623666633364313164373437626361646339363161 Dec 13 00:23:30.335000 audit: BPF prog-id=174 op=LOAD Dec 13 00:23:30.349950 kernel: audit: type=1334 audit(1765585410.335:570): prog-id=174 op=LOAD Dec 13 00:23:30.335000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3415 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653030396165623666633364313164373437626361646339363161 Dec 13 00:23:30.335000 audit: BPF prog-id=174 op=UNLOAD Dec 13 00:23:30.335000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653030396165623666633364313164373437626361646339363161 Dec 13 00:23:30.335000 audit: BPF prog-id=173 op=UNLOAD Dec 13 00:23:30.335000 audit[3637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653030396165623666633364313164373437626361646339363161 Dec 13 00:23:30.335000 audit: BPF prog-id=175 op=LOAD Dec 13 00:23:30.335000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3415 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:30.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653030396165623666633364313164373437626361646339363161 Dec 13 00:23:30.370567 containerd[1654]: time="2025-12-13T00:23:30.370415624Z" level=info msg="StartContainer for \"77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95\" returns successfully" Dec 13 00:23:31.017268 kubelet[2863]: E1213 00:23:31.017166 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:31.130149 kubelet[2863]: E1213 00:23:31.130108 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:32.131992 kubelet[2863]: E1213 00:23:32.131942 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:32.165065 containerd[1654]: time="2025-12-13T00:23:32.165016953Z" level=error msg="failed to 
reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:23:32.179904 systemd[1]: cri-containerd-77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95.scope: Deactivated successfully. Dec 13 00:23:32.180608 systemd[1]: cri-containerd-77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95.scope: Consumed 691ms CPU time, 179.8M memory peak, 2.7M read from disk, 171.3M written to disk. Dec 13 00:23:32.182618 containerd[1654]: time="2025-12-13T00:23:32.182563086Z" level=info msg="received container exit event container_id:\"77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95\" id:\"77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95\" pid:3650 exited_at:{seconds:1765585412 nanos:182032549}" Dec 13 00:23:32.185000 audit: BPF prog-id=175 op=UNLOAD Dec 13 00:23:32.203225 kubelet[2863]: I1213 00:23:32.203175 2863 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 13 00:23:32.211490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e009aeb6fc3d11d747bcadc961a93252e5572be08001d552c6ac809d778f95-rootfs.mount: Deactivated successfully. Dec 13 00:23:32.520246 systemd[1]: Created slice kubepods-burstable-podb1c23d96_704b_4a74_9865_eaafccfb9bb7.slice - libcontainer container kubepods-burstable-podb1c23d96_704b_4a74_9865_eaafccfb9bb7.slice. Dec 13 00:23:32.531420 systemd[1]: Created slice kubepods-burstable-pod4b0ba1ca_25ba_459c_a9ca_af094fdfd26e.slice - libcontainer container kubepods-burstable-pod4b0ba1ca_25ba_459c_a9ca_af094fdfd26e.slice. Dec 13 00:23:32.542207 systemd[1]: Created slice kubepods-besteffort-pod2276dd1e_f4c8_4649_b959_dbfab02532d1.slice - libcontainer container kubepods-besteffort-pod2276dd1e_f4c8_4649_b959_dbfab02532d1.slice. Dec 13 00:23:32.550076 systemd[1]: Created slice kubepods-besteffort-pod3bbc635a_68f2_4b21_9037_215dfb791b81.slice - libcontainer container kubepods-besteffort-pod3bbc635a_68f2_4b21_9037_215dfb791b81.slice. 
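
Note: the containerd error above, "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized", is reported while Calico's install-cni container is still running: containerd re-reads the CNI config directory on every fs change event and finds no usable network config until install-cni has finished writing one. Below is a minimal, purely illustrative sketch of that directory check in Go (standard library only; the directory path is the one quoted in the log, the accepted extensions are an assumption):

    // cnicheck.go - illustrative only: report whether a CNI network config
    // is present in the directory containerd watches (assumed /etc/cni/net.d).
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/cni/net.d" // directory named in the log message
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	var configs []string
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			configs = append(configs, filepath.Join(dir, e.Name()))
    		}
    	}
    	if len(configs) == 0 {
    		// This is the condition behind "no network config found in /etc/cni/net.d".
    		fmt.Println("no network config found: cni plugin not initialized")
    		return
    	}
    	fmt.Println("found CNI config(s):", configs)
    }

Once install-cni exits (as it does above, after consuming ~691ms CPU and writing ~171M to disk), a calico conflist appears in that directory and the reload succeeds on the next change event.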
Dec 13 00:23:32.550554 kubelet[2863]: I1213 00:23:32.550188 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fgw\" (UniqueName: \"kubernetes.io/projected/1fd617ff-92d1-4ae1-9f14-72f718d2a63a-kube-api-access-88fgw\") pod \"goldmane-7c778bb748-8shpw\" (UID: \"1fd617ff-92d1-4ae1-9f14-72f718d2a63a\") " pod="calico-system/goldmane-7c778bb748-8shpw" Dec 13 00:23:32.550554 kubelet[2863]: I1213 00:23:32.550221 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wc9b\" (UniqueName: \"kubernetes.io/projected/2276dd1e-f4c8-4649-b959-dbfab02532d1-kube-api-access-5wc9b\") pod \"calico-apiserver-59b87945cd-vf98m\" (UID: \"2276dd1e-f4c8-4649-b959-dbfab02532d1\") " pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" Dec 13 00:23:32.550554 kubelet[2863]: I1213 00:23:32.550279 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af29f9bb-907f-43a0-91d7-4904c3687176-calico-apiserver-certs\") pod \"calico-apiserver-59b87945cd-4dq42\" (UID: \"af29f9bb-907f-43a0-91d7-4904c3687176\") " pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" Dec 13 00:23:32.550554 kubelet[2863]: I1213 00:23:32.550301 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtch7\" (UniqueName: \"kubernetes.io/projected/b1c23d96-704b-4a74-9865-eaafccfb9bb7-kube-api-access-vtch7\") pod \"coredns-66bc5c9577-685xl\" (UID: \"b1c23d96-704b-4a74-9865-eaafccfb9bb7\") " pod="kube-system/coredns-66bc5c9577-685xl" Dec 13 00:23:32.550554 kubelet[2863]: I1213 00:23:32.550325 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vgn\" (UniqueName: \"kubernetes.io/projected/5589a830-43af-418a-8a78-170c45c22ba3-kube-api-access-b4vgn\") pod \"whisker-55ccf4bb4d-7qj56\" (UID: \"5589a830-43af-418a-8a78-170c45c22ba3\") " pod="calico-system/whisker-55ccf4bb4d-7qj56" Dec 13 00:23:32.550811 kubelet[2863]: I1213 00:23:32.550347 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1c23d96-704b-4a74-9865-eaafccfb9bb7-config-volume\") pod \"coredns-66bc5c9577-685xl\" (UID: \"b1c23d96-704b-4a74-9865-eaafccfb9bb7\") " pod="kube-system/coredns-66bc5c9577-685xl" Dec 13 00:23:32.550811 kubelet[2863]: I1213 00:23:32.550370 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnmm\" (UniqueName: \"kubernetes.io/projected/3bbc635a-68f2-4b21-9037-215dfb791b81-kube-api-access-2gnmm\") pod \"calico-kube-controllers-54554ff9b-xwvbk\" (UID: \"3bbc635a-68f2-4b21-9037-215dfb791b81\") " pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" Dec 13 00:23:32.550811 kubelet[2863]: I1213 00:23:32.550401 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fd617ff-92d1-4ae1-9f14-72f718d2a63a-config\") pod \"goldmane-7c778bb748-8shpw\" (UID: \"1fd617ff-92d1-4ae1-9f14-72f718d2a63a\") " pod="calico-system/goldmane-7c778bb748-8shpw" Dec 13 00:23:32.550811 kubelet[2863]: I1213 00:23:32.550423 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4lw\" (UniqueName: 
\"kubernetes.io/projected/af29f9bb-907f-43a0-91d7-4904c3687176-kube-api-access-xc4lw\") pod \"calico-apiserver-59b87945cd-4dq42\" (UID: \"af29f9bb-907f-43a0-91d7-4904c3687176\") " pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" Dec 13 00:23:32.550811 kubelet[2863]: I1213 00:23:32.550456 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fd617ff-92d1-4ae1-9f14-72f718d2a63a-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-8shpw\" (UID: \"1fd617ff-92d1-4ae1-9f14-72f718d2a63a\") " pod="calico-system/goldmane-7c778bb748-8shpw" Dec 13 00:23:32.550982 kubelet[2863]: I1213 00:23:32.550490 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2276dd1e-f4c8-4649-b959-dbfab02532d1-calico-apiserver-certs\") pod \"calico-apiserver-59b87945cd-vf98m\" (UID: \"2276dd1e-f4c8-4649-b959-dbfab02532d1\") " pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" Dec 13 00:23:32.550982 kubelet[2863]: I1213 00:23:32.550509 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5589a830-43af-418a-8a78-170c45c22ba3-whisker-backend-key-pair\") pod \"whisker-55ccf4bb4d-7qj56\" (UID: \"5589a830-43af-418a-8a78-170c45c22ba3\") " pod="calico-system/whisker-55ccf4bb4d-7qj56" Dec 13 00:23:32.550982 kubelet[2863]: I1213 00:23:32.550531 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5589a830-43af-418a-8a78-170c45c22ba3-whisker-ca-bundle\") pod \"whisker-55ccf4bb4d-7qj56\" (UID: \"5589a830-43af-418a-8a78-170c45c22ba3\") " pod="calico-system/whisker-55ccf4bb4d-7qj56" Dec 13 00:23:32.550982 kubelet[2863]: I1213 00:23:32.550551 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bbc635a-68f2-4b21-9037-215dfb791b81-tigera-ca-bundle\") pod \"calico-kube-controllers-54554ff9b-xwvbk\" (UID: \"3bbc635a-68f2-4b21-9037-215dfb791b81\") " pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" Dec 13 00:23:32.550982 kubelet[2863]: I1213 00:23:32.550572 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b0ba1ca-25ba-459c-a9ca-af094fdfd26e-config-volume\") pod \"coredns-66bc5c9577-g4dxz\" (UID: \"4b0ba1ca-25ba-459c-a9ca-af094fdfd26e\") " pod="kube-system/coredns-66bc5c9577-g4dxz" Dec 13 00:23:32.551110 kubelet[2863]: I1213 00:23:32.550596 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1fd617ff-92d1-4ae1-9f14-72f718d2a63a-goldmane-key-pair\") pod \"goldmane-7c778bb748-8shpw\" (UID: \"1fd617ff-92d1-4ae1-9f14-72f718d2a63a\") " pod="calico-system/goldmane-7c778bb748-8shpw" Dec 13 00:23:32.551110 kubelet[2863]: I1213 00:23:32.550629 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkh9\" (UniqueName: \"kubernetes.io/projected/4b0ba1ca-25ba-459c-a9ca-af094fdfd26e-kube-api-access-hmkh9\") pod \"coredns-66bc5c9577-g4dxz\" (UID: \"4b0ba1ca-25ba-459c-a9ca-af094fdfd26e\") " 
pod="kube-system/coredns-66bc5c9577-g4dxz" Dec 13 00:23:32.558000 systemd[1]: Created slice kubepods-besteffort-podaf29f9bb_907f_43a0_91d7_4904c3687176.slice - libcontainer container kubepods-besteffort-podaf29f9bb_907f_43a0_91d7_4904c3687176.slice. Dec 13 00:23:32.567086 systemd[1]: Created slice kubepods-besteffort-pod5589a830_43af_418a_8a78_170c45c22ba3.slice - libcontainer container kubepods-besteffort-pod5589a830_43af_418a_8a78_170c45c22ba3.slice. Dec 13 00:23:32.573469 systemd[1]: Created slice kubepods-besteffort-pod1fd617ff_92d1_4ae1_9f14_72f718d2a63a.slice - libcontainer container kubepods-besteffort-pod1fd617ff_92d1_4ae1_9f14_72f718d2a63a.slice. Dec 13 00:23:32.830727 kubelet[2863]: E1213 00:23:32.830561 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:32.831307 containerd[1654]: time="2025-12-13T00:23:32.831215659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-685xl,Uid:b1c23d96-704b-4a74-9865-eaafccfb9bb7,Namespace:kube-system,Attempt:0,}" Dec 13 00:23:32.840418 kubelet[2863]: E1213 00:23:32.840366 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:32.841540 containerd[1654]: time="2025-12-13T00:23:32.841498302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4dxz,Uid:4b0ba1ca-25ba-459c-a9ca-af094fdfd26e,Namespace:kube-system,Attempt:0,}" Dec 13 00:23:32.848951 containerd[1654]: time="2025-12-13T00:23:32.848509586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-vf98m,Uid:2276dd1e-f4c8-4649-b959-dbfab02532d1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:23:32.857373 containerd[1654]: time="2025-12-13T00:23:32.857315885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54554ff9b-xwvbk,Uid:3bbc635a-68f2-4b21-9037-215dfb791b81,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:32.865899 containerd[1654]: time="2025-12-13T00:23:32.865835865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-4dq42,Uid:af29f9bb-907f-43a0-91d7-4904c3687176,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:23:32.874972 containerd[1654]: time="2025-12-13T00:23:32.874913123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55ccf4bb4d-7qj56,Uid:5589a830-43af-418a-8a78-170c45c22ba3,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:32.882964 containerd[1654]: time="2025-12-13T00:23:32.882835340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8shpw,Uid:1fd617ff-92d1-4ae1-9f14-72f718d2a63a,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:32.981982 containerd[1654]: time="2025-12-13T00:23:32.981726531Z" level=error msg="Failed to destroy network for sandbox \"b6fdc46428b2728446e2377fffc5e31e67acb11ee1ee18753e56ca2ecb85c206\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:32.985895 containerd[1654]: time="2025-12-13T00:23:32.985814794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-685xl,Uid:b1c23d96-704b-4a74-9865-eaafccfb9bb7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"b6fdc46428b2728446e2377fffc5e31e67acb11ee1ee18753e56ca2ecb85c206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:32.987082 kubelet[2863]: E1213 00:23:32.987023 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fdc46428b2728446e2377fffc5e31e67acb11ee1ee18753e56ca2ecb85c206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:32.987349 kubelet[2863]: E1213 00:23:32.987322 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fdc46428b2728446e2377fffc5e31e67acb11ee1ee18753e56ca2ecb85c206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-685xl" Dec 13 00:23:32.987445 kubelet[2863]: E1213 00:23:32.987417 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fdc46428b2728446e2377fffc5e31e67acb11ee1ee18753e56ca2ecb85c206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-685xl" Dec 13 00:23:32.987829 kubelet[2863]: E1213 00:23:32.987592 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-685xl_kube-system(b1c23d96-704b-4a74-9865-eaafccfb9bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-685xl_kube-system(b1c23d96-704b-4a74-9865-eaafccfb9bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6fdc46428b2728446e2377fffc5e31e67acb11ee1ee18753e56ca2ecb85c206\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-685xl" podUID="b1c23d96-704b-4a74-9865-eaafccfb9bb7" Dec 13 00:23:32.992589 containerd[1654]: time="2025-12-13T00:23:32.992412251Z" level=error msg="Failed to destroy network for sandbox \"47d7941ba3aae64a214e39c687b2a38a90251cdcf3352a97a81deef23849683f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:32.995962 containerd[1654]: time="2025-12-13T00:23:32.995638073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4dxz,Uid:4b0ba1ca-25ba-459c-a9ca-af094fdfd26e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d7941ba3aae64a214e39c687b2a38a90251cdcf3352a97a81deef23849683f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:32.997166 kubelet[2863]: E1213 00:23:32.996636 2863 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d7941ba3aae64a214e39c687b2a38a90251cdcf3352a97a81deef23849683f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:32.997166 kubelet[2863]: E1213 00:23:32.996717 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d7941ba3aae64a214e39c687b2a38a90251cdcf3352a97a81deef23849683f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g4dxz" Dec 13 00:23:32.997166 kubelet[2863]: E1213 00:23:32.996742 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d7941ba3aae64a214e39c687b2a38a90251cdcf3352a97a81deef23849683f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g4dxz" Dec 13 00:23:32.997418 kubelet[2863]: E1213 00:23:32.996817 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-g4dxz_kube-system(4b0ba1ca-25ba-459c-a9ca-af094fdfd26e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-g4dxz_kube-system(4b0ba1ca-25ba-459c-a9ca-af094fdfd26e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47d7941ba3aae64a214e39c687b2a38a90251cdcf3352a97a81deef23849683f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g4dxz" podUID="4b0ba1ca-25ba-459c-a9ca-af094fdfd26e" Dec 13 00:23:33.005747 containerd[1654]: time="2025-12-13T00:23:33.005678610Z" level=error msg="Failed to destroy network for sandbox \"96f24fa49f5a88e4abb34b1b58564828c21fe2c2ee014937feb103f691298a95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.008518 containerd[1654]: time="2025-12-13T00:23:33.008489802Z" level=error msg="Failed to destroy network for sandbox \"d56d3931b1d318fb1c1ba5e165bcf0223e8c3c952c1e57676c3b9c9d6010b2fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.010989 containerd[1654]: time="2025-12-13T00:23:33.010865476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54554ff9b-xwvbk,Uid:3bbc635a-68f2-4b21-9037-215dfb791b81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d3931b1d318fb1c1ba5e165bcf0223e8c3c952c1e57676c3b9c9d6010b2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.011850 
kubelet[2863]: E1213 00:23:33.011763 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d3931b1d318fb1c1ba5e165bcf0223e8c3c952c1e57676c3b9c9d6010b2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.012049 kubelet[2863]: E1213 00:23:33.011856 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d3931b1d318fb1c1ba5e165bcf0223e8c3c952c1e57676c3b9c9d6010b2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" Dec 13 00:23:33.012049 kubelet[2863]: E1213 00:23:33.011881 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d3931b1d318fb1c1ba5e165bcf0223e8c3c952c1e57676c3b9c9d6010b2fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" Dec 13 00:23:33.012049 kubelet[2863]: E1213 00:23:33.011962 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54554ff9b-xwvbk_calico-system(3bbc635a-68f2-4b21-9037-215dfb791b81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54554ff9b-xwvbk_calico-system(3bbc635a-68f2-4b21-9037-215dfb791b81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d56d3931b1d318fb1c1ba5e165bcf0223e8c3c952c1e57676c3b9c9d6010b2fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" podUID="3bbc635a-68f2-4b21-9037-215dfb791b81" Dec 13 00:23:33.014033 containerd[1654]: time="2025-12-13T00:23:33.013960613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55ccf4bb4d-7qj56,Uid:5589a830-43af-418a-8a78-170c45c22ba3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f24fa49f5a88e4abb34b1b58564828c21fe2c2ee014937feb103f691298a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.014567 kubelet[2863]: E1213 00:23:33.014219 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f24fa49f5a88e4abb34b1b58564828c21fe2c2ee014937feb103f691298a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.014567 kubelet[2863]: E1213 00:23:33.014455 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96f24fa49f5a88e4abb34b1b58564828c21fe2c2ee014937feb103f691298a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55ccf4bb4d-7qj56" Dec 13 00:23:33.014567 kubelet[2863]: E1213 00:23:33.014512 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f24fa49f5a88e4abb34b1b58564828c21fe2c2ee014937feb103f691298a95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55ccf4bb4d-7qj56" Dec 13 00:23:33.014873 kubelet[2863]: E1213 00:23:33.014763 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55ccf4bb4d-7qj56_calico-system(5589a830-43af-418a-8a78-170c45c22ba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55ccf4bb4d-7qj56_calico-system(5589a830-43af-418a-8a78-170c45c22ba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96f24fa49f5a88e4abb34b1b58564828c21fe2c2ee014937feb103f691298a95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55ccf4bb4d-7qj56" podUID="5589a830-43af-418a-8a78-170c45c22ba3" Dec 13 00:23:33.016656 containerd[1654]: time="2025-12-13T00:23:33.016613628Z" level=error msg="Failed to destroy network for sandbox \"37de4135349268ab2bac99dd6fc0edca3571b256f177464658e1cacbdd69adcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.021019 containerd[1654]: time="2025-12-13T00:23:33.020971767Z" level=error msg="Failed to destroy network for sandbox \"9d3832a7faa6139bd6ace1252240af93138dc728caff82b014159c0ac2703f64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.021925 containerd[1654]: time="2025-12-13T00:23:33.021800745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-vf98m,Uid:2276dd1e-f4c8-4649-b959-dbfab02532d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37de4135349268ab2bac99dd6fc0edca3571b256f177464658e1cacbdd69adcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.022476 kubelet[2863]: E1213 00:23:33.022420 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37de4135349268ab2bac99dd6fc0edca3571b256f177464658e1cacbdd69adcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.022572 kubelet[2863]: E1213 00:23:33.022505 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"37de4135349268ab2bac99dd6fc0edca3571b256f177464658e1cacbdd69adcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" Dec 13 00:23:33.022572 kubelet[2863]: E1213 00:23:33.022534 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37de4135349268ab2bac99dd6fc0edca3571b256f177464658e1cacbdd69adcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" Dec 13 00:23:33.022648 kubelet[2863]: E1213 00:23:33.022603 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59b87945cd-vf98m_calico-apiserver(2276dd1e-f4c8-4649-b959-dbfab02532d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59b87945cd-vf98m_calico-apiserver(2276dd1e-f4c8-4649-b959-dbfab02532d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37de4135349268ab2bac99dd6fc0edca3571b256f177464658e1cacbdd69adcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:23:33.024887 containerd[1654]: time="2025-12-13T00:23:33.024815941Z" level=error msg="Failed to destroy network for sandbox \"8541cdc399de443f1ff6fc71058df28d14e2efd8f4d7b4305750ad93b786bc26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.025225 containerd[1654]: time="2025-12-13T00:23:33.025176519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8shpw,Uid:1fd617ff-92d1-4ae1-9f14-72f718d2a63a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3832a7faa6139bd6ace1252240af93138dc728caff82b014159c0ac2703f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.025544 kubelet[2863]: E1213 00:23:33.025483 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3832a7faa6139bd6ace1252240af93138dc728caff82b014159c0ac2703f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.025611 kubelet[2863]: E1213 00:23:33.025560 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3832a7faa6139bd6ace1252240af93138dc728caff82b014159c0ac2703f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8shpw" Dec 13 00:23:33.025611 kubelet[2863]: E1213 00:23:33.025582 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3832a7faa6139bd6ace1252240af93138dc728caff82b014159c0ac2703f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8shpw" Dec 13 00:23:33.025687 kubelet[2863]: E1213 00:23:33.025640 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-8shpw_calico-system(1fd617ff-92d1-4ae1-9f14-72f718d2a63a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-8shpw_calico-system(1fd617ff-92d1-4ae1-9f14-72f718d2a63a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d3832a7faa6139bd6ace1252240af93138dc728caff82b014159c0ac2703f64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:23:33.027390 containerd[1654]: time="2025-12-13T00:23:33.027330716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-4dq42,Uid:af29f9bb-907f-43a0-91d7-4904c3687176,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8541cdc399de443f1ff6fc71058df28d14e2efd8f4d7b4305750ad93b786bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.028455 kubelet[2863]: E1213 00:23:33.028398 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8541cdc399de443f1ff6fc71058df28d14e2efd8f4d7b4305750ad93b786bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.028564 kubelet[2863]: E1213 00:23:33.028471 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8541cdc399de443f1ff6fc71058df28d14e2efd8f4d7b4305750ad93b786bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" Dec 13 00:23:33.028564 kubelet[2863]: E1213 00:23:33.028492 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8541cdc399de443f1ff6fc71058df28d14e2efd8f4d7b4305750ad93b786bc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" Dec 13 00:23:33.028564 kubelet[2863]: E1213 00:23:33.028541 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-59b87945cd-4dq42_calico-apiserver(af29f9bb-907f-43a0-91d7-4904c3687176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59b87945cd-4dq42_calico-apiserver(af29f9bb-907f-43a0-91d7-4904c3687176)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8541cdc399de443f1ff6fc71058df28d14e2efd8f4d7b4305750ad93b786bc26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:23:33.029837 systemd[1]: Created slice kubepods-besteffort-pod43eaf899_3f04_44ec_95d7_4d02448959a8.slice - libcontainer container kubepods-besteffort-pod43eaf899_3f04_44ec_95d7_4d02448959a8.slice. Dec 13 00:23:33.034734 containerd[1654]: time="2025-12-13T00:23:33.034688333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp27z,Uid:43eaf899-3f04-44ec-95d7-4d02448959a8,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:33.101173 containerd[1654]: time="2025-12-13T00:23:33.101025636Z" level=error msg="Failed to destroy network for sandbox \"b73e5aacb18059557d4a067046cbeffc57fd09341a432ce90bd1f243d23a157a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.103921 containerd[1654]: time="2025-12-13T00:23:33.103870502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp27z,Uid:43eaf899-3f04-44ec-95d7-4d02448959a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73e5aacb18059557d4a067046cbeffc57fd09341a432ce90bd1f243d23a157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.104224 kubelet[2863]: E1213 00:23:33.104175 2863 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73e5aacb18059557d4a067046cbeffc57fd09341a432ce90bd1f243d23a157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:23:33.104299 kubelet[2863]: E1213 00:23:33.104259 2863 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73e5aacb18059557d4a067046cbeffc57fd09341a432ce90bd1f243d23a157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tp27z" Dec 13 00:23:33.104346 kubelet[2863]: E1213 00:23:33.104298 2863 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73e5aacb18059557d4a067046cbeffc57fd09341a432ce90bd1f243d23a157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tp27z" Dec 13 
00:23:33.104377 kubelet[2863]: E1213 00:23:33.104359 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b73e5aacb18059557d4a067046cbeffc57fd09341a432ce90bd1f243d23a157a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:33.138189 kubelet[2863]: E1213 00:23:33.138129 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:33.139166 containerd[1654]: time="2025-12-13T00:23:33.139115110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 13 00:23:38.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.91:22-10.0.0.1:39578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:38.084176 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:39578.service - OpenSSH per-connection server daemon (10.0.0.1:39578). Dec 13 00:23:38.085596 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 13 00:23:38.085728 kernel: audit: type=1130 audit(1765585418.084:575): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.91:22-10.0.0.1:39578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:38.664000 audit[3960]: USER_ACCT pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.665287 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 39578 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:23:38.668764 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:38.670283 kernel: audit: type=1101 audit(1765585418.664:576): pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.675281 kernel: audit: type=1103 audit(1765585418.666:577): pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.666000 audit[3960]: CRED_ACQ pid=3960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.684292 kernel: audit: type=1006 audit(1765585418.666:578): pid=3960 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 00:23:38.684395 kernel: audit: type=1300 audit(1765585418.666:578): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfff36140 a2=3 a3=0 items=0 ppid=1 pid=3960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:38.666000 audit[3960]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfff36140 a2=3 a3=0 items=0 ppid=1 pid=3960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:38.679505 systemd-logind[1637]: New session 9 of user core. Dec 13 00:23:38.666000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:38.687211 kernel: audit: type=1327 audit(1765585418.666:578): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:38.690594 systemd[1]: Started session-9.scope - Session 9 of User core. 
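
Note: every RunPodSandbox failure in the block above reports the same root cause, "stat /var/lib/calico/nodename: no such file or directory". The Calico CNI plugin refuses to add or delete pod networks until calico/node has started on the host and written that file, which is why coredns, calico-apiserver, goldmane, whisker, csi-node-driver and calico-kube-controllers all fail to get sandboxes during this window. A hedged sketch of the equivalent pre-flight check (the file path is taken from the log; everything else is illustrative):

    // nodenamecheck.go - illustrative pre-flight check mirroring the error in the log:
    // the Calico CNI plugin stats /var/lib/calico/nodename before setting up a pod.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    func main() {
    	const nodenameFile = "/var/lib/calico/nodename" // path quoted in the log
    	data, err := os.ReadFile(nodenameFile)
    	if errors.Is(err, os.ErrNotExist) {
    		// Matches the sandbox failures above: calico/node has not run yet,
    		// so the file holding this node's Calico name does not exist.
    		fmt.Println("calico/node has not initialized this host; pod networking will fail")
    		return
    	}
    	if err != nil {
    		fmt.Println("unexpected error:", err)
    		return
    	}
    	fmt.Printf("node registered with Calico as %q\n", string(data))
    }

The failures clear once the calico-node container (started further down, after the ghcr.io/flatcar/calico/node pull completes) has mounted /var/lib/calico/ and written the nodename file.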
Dec 13 00:23:38.693000 audit[3960]: USER_START pid=3960 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.705348 kernel: audit: type=1105 audit(1765585418.693:579): pid=3960 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.705496 kernel: audit: type=1103 audit(1765585418.697:580): pid=3968 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.697000 audit[3968]: CRED_ACQ pid=3968 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.854389 sshd[3968]: Connection closed by 10.0.0.1 port 39578 Dec 13 00:23:38.854766 sshd-session[3960]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:38.856000 audit[3960]: USER_END pid=3960 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.859730 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:39578.service: Deactivated successfully. Dec 13 00:23:38.862304 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 00:23:38.866892 kernel: audit: type=1106 audit(1765585418.856:581): pid=3960 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.867034 kernel: audit: type=1104 audit(1765585418.856:582): pid=3960 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.856000 audit[3960]: CRED_DISP pid=3960 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:38.863934 systemd-logind[1637]: Session 9 logged out. Waiting for processes to exit. Dec 13 00:23:38.865334 systemd-logind[1637]: Removed session 9. Dec 13 00:23:38.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.91:22-10.0.0.1:39578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:40.821776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253569482.mount: Deactivated successfully. 
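
Note: the recurring kubelet warning "Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" reflects the resolver limit of three nameservers per resolv.conf: the host lists more than three, so kubelet keeps only the first three when building pod DNS config and logs the rest as omitted. A small sketch of that truncation, assuming the input is /etc/resolv.conf and the limit of 3 (the details of kubelet's own parser are not reproduced here):

    // dnslimit.go - illustrative: keep only the first three nameservers from a
    // resolv.conf-style file, mirroring the kubelet warning seen in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // resolver limit behind "Nameserver limits exceeded"

    func main() {
    	raw, err := os.ReadFile("/etc/resolv.conf") // assumed input file
    	if err != nil {
    		fmt.Println("read error:", err)
    		return
    	}
    	var servers []string
    	for _, line := range strings.Split(string(raw), "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		// Matches the log: surplus entries are dropped with a warning.
    		fmt.Printf("nameserver limits exceeded, applying only: %s\n",
    			strings.Join(servers[:maxNameservers], " "))
    		return
    	}
    	fmt.Println("nameservers:", strings.Join(servers, " "))
    }
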
Dec 13 00:23:41.560016 containerd[1654]: time="2025-12-13T00:23:41.559944778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:41.562047 containerd[1654]: time="2025-12-13T00:23:41.562017804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 13 00:23:41.565699 containerd[1654]: time="2025-12-13T00:23:41.565666608Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:41.571120 containerd[1654]: time="2025-12-13T00:23:41.571058469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:23:41.571695 containerd[1654]: time="2025-12-13T00:23:41.571638910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.432468906s" Dec 13 00:23:41.571695 containerd[1654]: time="2025-12-13T00:23:41.571681199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 13 00:23:41.593380 containerd[1654]: time="2025-12-13T00:23:41.593325404Z" level=info msg="CreateContainer within sandbox \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 00:23:41.603506 containerd[1654]: time="2025-12-13T00:23:41.603457410Z" level=info msg="Container 589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:41.618049 containerd[1654]: time="2025-12-13T00:23:41.617991337Z" level=info msg="CreateContainer within sandbox \"682fb9217f93fb164a9fe5eabf552386b80a87e6dca44d2b777b98a652a8e495\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d\"" Dec 13 00:23:41.618590 containerd[1654]: time="2025-12-13T00:23:41.618559685Z" level=info msg="StartContainer for \"589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d\"" Dec 13 00:23:41.620231 containerd[1654]: time="2025-12-13T00:23:41.620173927Z" level=info msg="connecting to shim 589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d" address="unix:///run/containerd/s/fcdb30364c17bae0bc50d94f75cb9c9cdc68e4a9f09364dd4d0a525b7ef960be" protocol=ttrpc version=3 Dec 13 00:23:41.646598 systemd[1]: Started cri-containerd-589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d.scope - libcontainer container 589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d. 
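
Note: the audit records throughout this section, including the block that follows, pair BPF prog-id LOAD/UNLOAD events with syscall 321 (bpf(2) on x86_64), emitted as runc installs and releases per-container eBPF filters, and each record carries the audited command line in the PROCTITLE field as hex with NUL-separated arguments; the strings beginning 72756E63... decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...". A minimal decoder sketch (standard library only; the sample value is a shortened prefix of the logged field):

    // proctitle.go - illustrative decoder for audit PROCTITLE fields:
    // the value is the process argv, hex-encoded with NUL bytes between arguments.
    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    func main() {
    	// Truncated sample from the log; real PROCTITLE values are longer.
    	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

    	raw, err := hex.DecodeString(proctitle)
    	if err != nil {
    		fmt.Println("not valid hex:", err)
    		return
    	}
    	// Arguments are separated by NUL bytes, exactly as in /proc/<pid>/cmdline.
    	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
    	fmt.Println(strings.Join(args, " ")) // prints: runc --root /run/containerd/runc/k8s.io
    }
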
Dec 13 00:23:41.737000 audit: BPF prog-id=176 op=LOAD Dec 13 00:23:41.737000 audit[3984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3415 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:41.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396166366337373534373532663536643930356237356134383966 Dec 13 00:23:41.737000 audit: BPF prog-id=177 op=LOAD Dec 13 00:23:41.737000 audit[3984]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3415 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:41.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396166366337373534373532663536643930356237356134383966 Dec 13 00:23:41.737000 audit: BPF prog-id=177 op=UNLOAD Dec 13 00:23:41.737000 audit[3984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:41.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396166366337373534373532663536643930356237356134383966 Dec 13 00:23:41.737000 audit: BPF prog-id=176 op=UNLOAD Dec 13 00:23:41.737000 audit[3984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:41.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396166366337373534373532663536643930356237356134383966 Dec 13 00:23:41.737000 audit: BPF prog-id=178 op=LOAD Dec 13 00:23:41.737000 audit[3984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3415 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:41.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538396166366337373534373532663536643930356237356134383966 Dec 13 00:23:41.760014 containerd[1654]: time="2025-12-13T00:23:41.759958815Z" level=info msg="StartContainer for 
\"589af6c7754752f56d905b75a489f513091d868ef4d5c2958dcb944cc1c4970d\" returns successfully" Dec 13 00:23:41.857637 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 00:23:41.857903 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 00:23:42.326263 kubelet[2863]: I1213 00:23:42.324629 2863 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5589a830-43af-418a-8a78-170c45c22ba3-whisker-ca-bundle\") pod \"5589a830-43af-418a-8a78-170c45c22ba3\" (UID: \"5589a830-43af-418a-8a78-170c45c22ba3\") " Dec 13 00:23:42.326263 kubelet[2863]: I1213 00:23:42.324698 2863 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4vgn\" (UniqueName: \"kubernetes.io/projected/5589a830-43af-418a-8a78-170c45c22ba3-kube-api-access-b4vgn\") pod \"5589a830-43af-418a-8a78-170c45c22ba3\" (UID: \"5589a830-43af-418a-8a78-170c45c22ba3\") " Dec 13 00:23:42.326263 kubelet[2863]: I1213 00:23:42.324731 2863 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5589a830-43af-418a-8a78-170c45c22ba3-whisker-backend-key-pair\") pod \"5589a830-43af-418a-8a78-170c45c22ba3\" (UID: \"5589a830-43af-418a-8a78-170c45c22ba3\") " Dec 13 00:23:42.327295 kubelet[2863]: I1213 00:23:42.327264 2863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5589a830-43af-418a-8a78-170c45c22ba3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5589a830-43af-418a-8a78-170c45c22ba3" (UID: "5589a830-43af-418a-8a78-170c45c22ba3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 13 00:23:42.335522 kubelet[2863]: I1213 00:23:42.335426 2863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5589a830-43af-418a-8a78-170c45c22ba3-kube-api-access-b4vgn" (OuterVolumeSpecName: "kube-api-access-b4vgn") pod "5589a830-43af-418a-8a78-170c45c22ba3" (UID: "5589a830-43af-418a-8a78-170c45c22ba3"). InnerVolumeSpecName "kube-api-access-b4vgn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 13 00:23:42.336839 systemd[1]: var-lib-kubelet-pods-5589a830\x2d43af\x2d418a\x2d8a78\x2d170c45c22ba3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4vgn.mount: Deactivated successfully. Dec 13 00:23:42.345261 systemd[1]: var-lib-kubelet-pods-5589a830\x2d43af\x2d418a\x2d8a78\x2d170c45c22ba3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 13 00:23:42.346449 kubelet[2863]: I1213 00:23:42.345425 2863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5589a830-43af-418a-8a78-170c45c22ba3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5589a830-43af-418a-8a78-170c45c22ba3" (UID: "5589a830-43af-418a-8a78-170c45c22ba3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 13 00:23:42.425698 kubelet[2863]: I1213 00:23:42.425638 2863 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4vgn\" (UniqueName: \"kubernetes.io/projected/5589a830-43af-418a-8a78-170c45c22ba3-kube-api-access-b4vgn\") on node \"localhost\" DevicePath \"\"" Dec 13 00:23:42.425698 kubelet[2863]: I1213 00:23:42.425685 2863 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5589a830-43af-418a-8a78-170c45c22ba3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 13 00:23:42.425698 kubelet[2863]: I1213 00:23:42.425698 2863 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5589a830-43af-418a-8a78-170c45c22ba3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 00:23:42.591304 kubelet[2863]: E1213 00:23:42.590881 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:42.597918 systemd[1]: Removed slice kubepods-besteffort-pod5589a830_43af_418a_8a78_170c45c22ba3.slice - libcontainer container kubepods-besteffort-pod5589a830_43af_418a_8a78_170c45c22ba3.slice. Dec 13 00:23:42.612190 kubelet[2863]: I1213 00:23:42.612108 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nkqp5" podStartSLOduration=2.317664137 podStartE2EDuration="24.612088891s" podCreationTimestamp="2025-12-13 00:23:18 +0000 UTC" firstStartedPulling="2025-12-13 00:23:19.277982068 +0000 UTC m=+21.367254280" lastFinishedPulling="2025-12-13 00:23:41.572406832 +0000 UTC m=+43.661679034" observedRunningTime="2025-12-13 00:23:42.611582149 +0000 UTC m=+44.700854371" watchObservedRunningTime="2025-12-13 00:23:42.612088891 +0000 UTC m=+44.701361113" Dec 13 00:23:42.712227 systemd[1]: Created slice kubepods-besteffort-pode9fa6631_f723_4789_af25_63888ed257d2.slice - libcontainer container kubepods-besteffort-pode9fa6631_f723_4789_af25_63888ed257d2.slice. 
Dec 13 00:23:42.827567 kubelet[2863]: I1213 00:23:42.827483 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e9fa6631-f723-4789-af25-63888ed257d2-whisker-backend-key-pair\") pod \"whisker-6bb8446dc4-n7rmd\" (UID: \"e9fa6631-f723-4789-af25-63888ed257d2\") " pod="calico-system/whisker-6bb8446dc4-n7rmd" Dec 13 00:23:42.827567 kubelet[2863]: I1213 00:23:42.827547 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrbx\" (UniqueName: \"kubernetes.io/projected/e9fa6631-f723-4789-af25-63888ed257d2-kube-api-access-ltrbx\") pod \"whisker-6bb8446dc4-n7rmd\" (UID: \"e9fa6631-f723-4789-af25-63888ed257d2\") " pod="calico-system/whisker-6bb8446dc4-n7rmd" Dec 13 00:23:42.827567 kubelet[2863]: I1213 00:23:42.827572 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9fa6631-f723-4789-af25-63888ed257d2-whisker-ca-bundle\") pod \"whisker-6bb8446dc4-n7rmd\" (UID: \"e9fa6631-f723-4789-af25-63888ed257d2\") " pod="calico-system/whisker-6bb8446dc4-n7rmd" Dec 13 00:23:43.018763 containerd[1654]: time="2025-12-13T00:23:43.018692116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb8446dc4-n7rmd,Uid:e9fa6631-f723-4789-af25-63888ed257d2,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:43.207779 systemd-networkd[1316]: cali1f187876d6e: Link UP Dec 13 00:23:43.208172 systemd-networkd[1316]: cali1f187876d6e: Gained carrier Dec 13 00:23:43.224271 containerd[1654]: 2025-12-13 00:23:43.050 [INFO][4076] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:23:43.224271 containerd[1654]: 2025-12-13 00:23:43.073 [INFO][4076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0 whisker-6bb8446dc4- calico-system e9fa6631-f723-4789-af25-63888ed257d2 973 0 2025-12-13 00:23:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bb8446dc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6bb8446dc4-n7rmd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1f187876d6e [] [] }} ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-" Dec 13 00:23:43.224271 containerd[1654]: 2025-12-13 00:23:43.073 [INFO][4076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.224271 containerd[1654]: 2025-12-13 00:23:43.152 [INFO][4091] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" HandleID="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Workload="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.153 [INFO][4091] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" HandleID="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Workload="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6bb8446dc4-n7rmd", "timestamp":"2025-12-13 00:23:43.152420989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.153 [INFO][4091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.153 [INFO][4091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.153 [INFO][4091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.163 [INFO][4091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" host="localhost" Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.170 [INFO][4091] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.179 [INFO][4091] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.181 [INFO][4091] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.183 [INFO][4091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:43.224646 containerd[1654]: 2025-12-13 00:23:43.183 [INFO][4091] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" host="localhost" Dec 13 00:23:43.224984 containerd[1654]: 2025-12-13 00:23:43.185 [INFO][4091] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0 Dec 13 00:23:43.224984 containerd[1654]: 2025-12-13 00:23:43.189 [INFO][4091] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" host="localhost" Dec 13 00:23:43.224984 containerd[1654]: 2025-12-13 00:23:43.194 [INFO][4091] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" host="localhost" Dec 13 00:23:43.224984 containerd[1654]: 2025-12-13 00:23:43.194 [INFO][4091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" host="localhost" Dec 13 00:23:43.224984 containerd[1654]: 2025-12-13 00:23:43.194 [INFO][4091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:23:43.224984 containerd[1654]: 2025-12-13 00:23:43.194 [INFO][4091] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" HandleID="k8s-pod-network.514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Workload="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.225201 containerd[1654]: 2025-12-13 00:23:43.198 [INFO][4076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0", GenerateName:"whisker-6bb8446dc4-", Namespace:"calico-system", SelfLink:"", UID:"e9fa6631-f723-4789-af25-63888ed257d2", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb8446dc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6bb8446dc4-n7rmd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f187876d6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:43.225201 containerd[1654]: 2025-12-13 00:23:43.198 [INFO][4076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.226299 containerd[1654]: 2025-12-13 00:23:43.198 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f187876d6e ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.226299 containerd[1654]: 2025-12-13 00:23:43.209 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.226384 containerd[1654]: 2025-12-13 00:23:43.209 [INFO][4076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0", GenerateName:"whisker-6bb8446dc4-", Namespace:"calico-system", SelfLink:"", UID:"e9fa6631-f723-4789-af25-63888ed257d2", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb8446dc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0", Pod:"whisker-6bb8446dc4-n7rmd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f187876d6e", MAC:"8a:66:e4:c6:de:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:43.226464 containerd[1654]: 2025-12-13 00:23:43.220 [INFO][4076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" Namespace="calico-system" Pod="whisker-6bb8446dc4-n7rmd" WorkloadEndpoint="localhost-k8s-whisker--6bb8446dc4--n7rmd-eth0" Dec 13 00:23:43.458468 containerd[1654]: time="2025-12-13T00:23:43.458203875Z" level=info msg="connecting to shim 514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0" address="unix:///run/containerd/s/646db00589b893871010c2911aeb6cb6e87980dad9eb56577a952abe0fda7b2d" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:43.485540 systemd[1]: Started cri-containerd-514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0.scope - libcontainer container 514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0. 
Dec 13 00:23:43.575484 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 13 00:23:43.575640 kernel: audit: type=1334 audit(1765585423.571:589): prog-id=179 op=LOAD Dec 13 00:23:43.571000 audit: BPF prog-id=179 op=LOAD Dec 13 00:23:43.577376 kernel: audit: type=1334 audit(1765585423.571:590): prog-id=180 op=LOAD Dec 13 00:23:43.571000 audit: BPF prog-id=180 op=LOAD Dec 13 00:23:43.575884 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:43.571000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.590503 kernel: audit: type=1300 audit(1765585423.571:590): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.590591 kernel: audit: type=1327 audit(1765585423.571:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.592560 kernel: audit: type=1334 audit(1765585423.572:591): prog-id=180 op=UNLOAD Dec 13 00:23:43.572000 audit: BPF prog-id=180 op=UNLOAD Dec 13 00:23:43.598676 kernel: audit: type=1300 audit(1765585423.572:591): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.572000 audit[4125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.598880 kubelet[2863]: E1213 00:23:43.594266 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:43.606414 kernel: audit: type=1327 audit(1765585423.572:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 
00:23:43.572000 audit: BPF prog-id=181 op=LOAD Dec 13 00:23:43.608331 kernel: audit: type=1334 audit(1765585423.572:592): prog-id=181 op=LOAD Dec 13 00:23:43.572000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.617115 kernel: audit: type=1300 audit(1765585423.572:592): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.628269 kernel: audit: type=1327 audit(1765585423.572:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.572000 audit: BPF prog-id=182 op=LOAD Dec 13 00:23:43.572000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.572000 audit: BPF prog-id=182 op=UNLOAD Dec 13 00:23:43.572000 audit[4125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.572000 audit: BPF prog-id=181 op=UNLOAD Dec 13 00:23:43.572000 audit[4125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.572000 audit: BPF prog-id=183 op=LOAD 
Dec 13 00:23:43.572000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4114 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531343838386631333138633830663963376566323066623930356161 Dec 13 00:23:43.678514 containerd[1654]: time="2025-12-13T00:23:43.678382237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb8446dc4-n7rmd,Uid:e9fa6631-f723-4789-af25-63888ed257d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"514888f1318c80f9c7ef20fb905aafc2f08192082f9f51b8a7d8ee65203d02f0\"" Dec 13 00:23:43.686322 containerd[1654]: time="2025-12-13T00:23:43.685966626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:23:43.785000 audit: BPF prog-id=184 op=LOAD Dec 13 00:23:43.785000 audit[4298]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4432f0d0 a2=98 a3=1fffffffffffffff items=0 ppid=4156 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:23:43.785000 audit: BPF prog-id=184 op=UNLOAD Dec 13 00:23:43.785000 audit[4298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc4432f0a0 a3=0 items=0 ppid=4156 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:23:43.785000 audit: BPF prog-id=185 op=LOAD Dec 13 00:23:43.785000 audit[4298]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4432efb0 a2=94 a3=3 items=0 ppid=4156 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:23:43.785000 audit: BPF prog-id=185 op=UNLOAD Dec 13 00:23:43.785000 audit[4298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4432efb0 a2=94 a3=3 items=0 ppid=4156 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:23:43.785000 audit: BPF prog-id=186 op=LOAD Dec 13 00:23:43.785000 audit[4298]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4432eff0 a2=94 a3=7ffc4432f1d0 items=0 ppid=4156 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:23:43.785000 audit: BPF prog-id=186 op=UNLOAD Dec 13 00:23:43.785000 audit[4298]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4432eff0 a2=94 a3=7ffc4432f1d0 items=0 ppid=4156 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.785000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:23:43.788000 audit: BPF prog-id=187 op=LOAD Dec 13 00:23:43.788000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe61cb5060 a2=98 a3=3 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.788000 audit: BPF prog-id=187 op=UNLOAD Dec 13 00:23:43.788000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe61cb5030 a3=0 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.788000 audit: BPF prog-id=188 op=LOAD Dec 13 00:23:43.788000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe61cb4e50 a2=94 a3=54428f items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.788000 audit: BPF prog-id=188 op=UNLOAD Dec 13 00:23:43.788000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe61cb4e50 a2=94 a3=54428f items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.788000 audit: BPF prog-id=189 op=LOAD Dec 13 00:23:43.788000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe61cb4e80 a2=94 a3=2 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.788000 audit: BPF prog-id=189 op=UNLOAD Dec 13 00:23:43.788000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe61cb4e80 a2=0 a3=2 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.91:22-10.0.0.1:40802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:43.871423 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:40802.service - OpenSSH per-connection server daemon (10.0.0.1:40802). Dec 13 00:23:43.967000 audit[4301]: USER_ACCT pid=4301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:43.969500 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 40802 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:23:43.969000 audit[4301]: CRED_ACQ pid=4301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:43.969000 audit[4301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc146af6d0 a2=3 a3=0 items=0 ppid=1 pid=4301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.969000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:43.972126 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:43.977979 systemd-logind[1637]: New session 10 of user core. 
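The proctitle= payloads in the surrounding audit records are hex-encoded argv vectors with NUL separators. A minimal Go sketch for decoding them offline (decodeProctitle is an illustrative helper name, not something from the log; the sample payload is copied verbatim from the bpftool records in this section):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex payload back into a readable
// command line: the kernel records argv NUL-separated, so split on 0x00.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// Payload copied from the nearby "bpftool map list --json" audit records.
	cmd, err := decodeProctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // bpftool map list --json
}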
Dec 13 00:23:43.981000 audit: BPF prog-id=190 op=LOAD Dec 13 00:23:43.981000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe61cb4d40 a2=94 a3=1 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.981000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.981000 audit: BPF prog-id=190 op=UNLOAD Dec 13 00:23:43.981000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe61cb4d40 a2=94 a3=1 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.981000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.986440 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 00:23:43.988000 audit[4301]: USER_START pid=4301 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:43.990000 audit[4305]: CRED_ACQ pid=4305 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:43.992000 audit: BPF prog-id=191 op=LOAD Dec 13 00:23:43.992000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe61cb4d30 a2=94 a3=4 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.992000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.992000 audit: BPF prog-id=191 op=UNLOAD Dec 13 00:23:43.992000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe61cb4d30 a2=0 a3=4 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.992000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=192 op=LOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe61cb4b90 a2=94 a3=5 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=192 op=UNLOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe61cb4b90 a2=0 a3=5 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=193 op=LOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe61cb4db0 a2=94 a3=6 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=193 op=UNLOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe61cb4db0 a2=0 a3=6 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=194 op=LOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe61cb4560 a2=94 a3=88 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=195 op=LOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe61cb43e0 a2=94 a3=2 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.993000 audit: BPF prog-id=195 op=UNLOAD Dec 13 00:23:43.993000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe61cb4410 a2=0 a3=7ffe61cb4510 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:43.994000 audit: BPF prog-id=194 op=UNLOAD Dec 13 00:23:43.994000 audit[4299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=26fd6d10 a2=0 a3=7a28384a84e14f27 items=0 ppid=4156 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:43.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:23:44.007000 audit: BPF prog-id=196 op=LOAD Dec 13 00:23:44.007000 audit[4312]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff25da240 a2=98 a3=1999999999999999 items=0 ppid=4156 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.007000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:23:44.007000 audit: BPF prog-id=196 op=UNLOAD Dec 13 00:23:44.007000 audit[4312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff25da210 a3=0 items=0 ppid=4156 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.007000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:23:44.007000 audit: BPF prog-id=197 op=LOAD Dec 13 00:23:44.007000 audit[4312]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff25da120 a2=94 a3=ffff items=0 ppid=4156 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.007000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:23:44.007000 audit: BPF prog-id=197 op=UNLOAD Dec 13 00:23:44.007000 audit[4312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff25da120 a2=94 a3=ffff items=0 ppid=4156 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.007000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:23:44.007000 audit: BPF prog-id=198 op=LOAD Dec 13 00:23:44.007000 audit[4312]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff25da160 a2=94 a3=7ffff25da340 items=0 ppid=4156 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.007000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:23:44.007000 audit: BPF prog-id=198 op=UNLOAD Dec 13 00:23:44.007000 audit[4312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff25da160 a2=94 a3=7ffff25da340 items=0 ppid=4156 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.007000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:23:44.022755 kubelet[2863]: I1213 00:23:44.022704 2863 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5589a830-43af-418a-8a78-170c45c22ba3" path="/var/lib/kubelet/pods/5589a830-43af-418a-8a78-170c45c22ba3/volumes" Dec 13 00:23:44.027469 kubelet[2863]: E1213 00:23:44.027424 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:44.028691 containerd[1654]: time="2025-12-13T00:23:44.028011948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4dxz,Uid:4b0ba1ca-25ba-459c-a9ca-af094fdfd26e,Namespace:kube-system,Attempt:0,}" Dec 13 00:23:44.030305 containerd[1654]: time="2025-12-13T00:23:44.030232378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-4dq42,Uid:af29f9bb-907f-43a0-91d7-4904c3687176,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:23:44.129860 systemd-networkd[1316]: vxlan.calico: Link UP Dec 13 00:23:44.129872 systemd-networkd[1316]: vxlan.calico: Gained carrier Dec 13 00:23:44.171658 sshd[4305]: Connection closed by 10.0.0.1 port 40802 Dec 13 00:23:44.173320 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:44.174000 audit[4301]: USER_END pid=4301 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:44.174000 audit[4301]: CRED_DISP pid=4301 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:44.179000 audit: BPF prog-id=199 op=LOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb854a240 a2=98 a3=0 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=199 op=UNLOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb854a210 a3=0 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=200 op=LOAD Dec 13 
00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb854a050 a2=94 a3=54428f items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=200 op=UNLOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb854a050 a2=94 a3=54428f items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=201 op=LOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb854a080 a2=94 a3=2 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=201 op=UNLOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb854a080 a2=0 a3=2 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=202 op=LOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb8549e30 a2=94 a3=4 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=202 op=UNLOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffb8549e30 a2=94 a3=4 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=203 op=LOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb8549f30 a2=94 a3=7fffb854a0b0 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.179000 audit: BPF prog-id=203 op=UNLOAD Dec 13 00:23:44.179000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffb8549f30 a2=0 a3=7fffb854a0b0 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.179000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.184000 audit: BPF prog-id=204 op=LOAD Dec 13 00:23:44.184000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb8549660 a2=94 a3=2 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.184000 audit: BPF prog-id=204 op=UNLOAD Dec 13 00:23:44.184000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffb8549660 a2=0 a3=2 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.184000 audit: BPF prog-id=205 op=LOAD Dec 13 00:23:44.184000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb8549760 a2=94 a3=30 items=0 ppid=4156 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:23:44.185000 audit[1]: SERVICE_STOP pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.91:22-10.0.0.1:40802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:44.185810 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:40802.service: Deactivated successfully. Dec 13 00:23:44.188992 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 00:23:44.193076 systemd-logind[1637]: Session 10 logged out. Waiting for processes to exit. Dec 13 00:23:44.195484 systemd-logind[1637]: Removed session 10. Dec 13 00:23:44.196000 audit: BPF prog-id=206 op=LOAD Dec 13 00:23:44.196000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1dace280 a2=98 a3=0 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.196000 audit: BPF prog-id=206 op=UNLOAD Dec 13 00:23:44.196000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff1dace250 a3=0 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.196000 audit: BPF prog-id=207 op=LOAD Dec 13 00:23:44.196000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1dace070 a2=94 a3=54428f items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.196000 audit: BPF prog-id=207 op=UNLOAD Dec 13 00:23:44.196000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1dace070 a2=94 a3=54428f items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.196000 audit: BPF prog-id=208 op=LOAD Dec 13 00:23:44.196000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1dace0a0 a2=94 a3=2 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.196000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.196000 audit: BPF prog-id=208 op=UNLOAD Dec 13 00:23:44.196000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1dace0a0 a2=0 a3=2 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.221733 systemd-networkd[1316]: calibc1dd76683c: Link UP Dec 13 00:23:44.221965 systemd-networkd[1316]: calibc1dd76683c: Gained carrier Dec 13 00:23:44.233491 containerd[1654]: time="2025-12-13T00:23:44.233440584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:44.241417 containerd[1654]: 2025-12-13 00:23:44.099 [INFO][4342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0 calico-apiserver-59b87945cd- calico-apiserver af29f9bb-907f-43a0-91d7-4904c3687176 848 0 2025-12-13 00:23:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59b87945cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59b87945cd-4dq42 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc1dd76683c [] [] }} ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-" Dec 13 00:23:44.241417 containerd[1654]: 2025-12-13 00:23:44.099 [INFO][4342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.241417 containerd[1654]: 2025-12-13 00:23:44.158 [INFO][4366] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" HandleID="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Workload="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.158 [INFO][4366] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" HandleID="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Workload="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000582c10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59b87945cd-4dq42", "timestamp":"2025-12-13 00:23:44.158152712 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.158 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.158 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.159 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.173 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" host="localhost" Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.184 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.189 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.191 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.194 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:44.241589 containerd[1654]: 2025-12-13 00:23:44.194 [INFO][4366] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" host="localhost" Dec 13 00:23:44.241815 containerd[1654]: 2025-12-13 00:23:44.196 [INFO][4366] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321 Dec 13 00:23:44.241815 containerd[1654]: 2025-12-13 00:23:44.204 [INFO][4366] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" host="localhost" Dec 13 00:23:44.241815 containerd[1654]: 2025-12-13 00:23:44.210 [INFO][4366] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" host="localhost" Dec 13 00:23:44.241815 containerd[1654]: 2025-12-13 00:23:44.210 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" host="localhost" Dec 13 00:23:44.241815 containerd[1654]: 2025-12-13 00:23:44.210 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:23:44.241815 containerd[1654]: 2025-12-13 00:23:44.210 [INFO][4366] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" HandleID="k8s-pod-network.a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Workload="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.241933 containerd[1654]: 2025-12-13 00:23:44.217 [INFO][4342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0", GenerateName:"calico-apiserver-59b87945cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"af29f9bb-907f-43a0-91d7-4904c3687176", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b87945cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59b87945cd-4dq42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc1dd76683c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:44.241983 containerd[1654]: 2025-12-13 00:23:44.217 [INFO][4342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.241983 containerd[1654]: 2025-12-13 00:23:44.217 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc1dd76683c ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.241983 containerd[1654]: 2025-12-13 00:23:44.221 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.242054 containerd[1654]: 2025-12-13 00:23:44.222 [INFO][4342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0", GenerateName:"calico-apiserver-59b87945cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"af29f9bb-907f-43a0-91d7-4904c3687176", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b87945cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321", Pod:"calico-apiserver-59b87945cd-4dq42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc1dd76683c", MAC:"a2:1e:84:f0:6b:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:44.242101 containerd[1654]: 2025-12-13 00:23:44.236 [INFO][4342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-4dq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--4dq42-eth0" Dec 13 00:23:44.286098 containerd[1654]: time="2025-12-13T00:23:44.285752264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:23:44.286098 containerd[1654]: time="2025-12-13T00:23:44.285814625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:44.286322 kubelet[2863]: E1213 00:23:44.286100 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:23:44.286322 kubelet[2863]: E1213 00:23:44.286168 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:23:44.287391 kubelet[2863]: E1213 00:23:44.287341 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-6bb8446dc4-n7rmd_calico-system(e9fa6631-f723-4789-af25-63888ed257d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:44.290289 containerd[1654]: time="2025-12-13T00:23:44.290204516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:23:44.402000 audit: BPF prog-id=209 op=LOAD Dec 13 00:23:44.402000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1dacdf60 a2=94 a3=1 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.402000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.402000 audit: BPF prog-id=209 op=UNLOAD Dec 13 00:23:44.402000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1dacdf60 a2=94 a3=1 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.402000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: BPF prog-id=210 op=LOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1dacdf50 a2=94 a3=4 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: BPF prog-id=210 op=UNLOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff1dacdf50 a2=0 a3=4 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: BPF prog-id=211 op=LOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1dacddb0 a2=94 a3=5 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: 
BPF prog-id=211 op=UNLOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff1dacddb0 a2=0 a3=5 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: BPF prog-id=212 op=LOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1dacdfd0 a2=94 a3=6 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: BPF prog-id=212 op=UNLOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff1dacdfd0 a2=0 a3=6 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.414000 audit: BPF prog-id=213 op=LOAD Dec 13 00:23:44.414000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1dacd780 a2=94 a3=88 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.415000 audit: BPF prog-id=214 op=LOAD Dec 13 00:23:44.415000 audit[4402]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff1dacd600 a2=94 a3=2 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.415000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.415000 audit: BPF prog-id=214 op=UNLOAD Dec 13 00:23:44.415000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff1dacd630 a2=0 a3=7fff1dacd730 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.415000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.416000 audit: BPF prog-id=213 op=UNLOAD Dec 13 00:23:44.416000 audit[4402]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1727bd10 a2=0 a3=d2c3d32f02757f48 items=0 ppid=4156 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.416000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:23:44.428000 audit: BPF prog-id=205 op=UNLOAD Dec 13 00:23:44.428000 audit[4156]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0011222c0 a2=0 a3=0 items=0 ppid=4140 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.428000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 13 00:23:44.440574 systemd-networkd[1316]: cali1f187876d6e: Gained IPv6LL Dec 13 00:23:44.503427 systemd-networkd[1316]: cali806d2e8955f: Link UP Dec 13 00:23:44.504537 systemd-networkd[1316]: cali806d2e8955f: Gained carrier Dec 13 00:23:44.514000 audit[4432]: NETFILTER_CFG table=nat:117 family=2 entries=15 op=nft_register_chain pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:44.514000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff277cfc40 a2=0 a3=7fff277cfc2c items=0 ppid=4156 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.514000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:44.517000 audit[4436]: NETFILTER_CFG table=mangle:118 family=2 entries=16 op=nft_register_chain pid=4436 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:44.517000 audit[4436]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffbae30d50 a2=0 a3=7fffbae30d3c items=0 ppid=4156 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.517000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:44.539084 containerd[1654]: time="2025-12-13T00:23:44.539018435Z" level=info msg="connecting to shim a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321" address="unix:///run/containerd/s/60e6dfe9da60b103669591218a29ba31d154269eb42ae941457188bf03f95578" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:44.538000 audit[4433]: NETFILTER_CFG table=raw:119 family=2 entries=21 op=nft_register_chain pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:44.538000 audit[4433]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=8452 a0=3 a1=7ffcb42b0690 a2=0 a3=7ffcb42b067c items=0 ppid=4156 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.538000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:44.543360 containerd[1654]: 2025-12-13 00:23:44.085 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--g4dxz-eth0 coredns-66bc5c9577- kube-system 4b0ba1ca-25ba-459c-a9ca-af094fdfd26e 845 0 2025-12-13 00:23:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-g4dxz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali806d2e8955f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-" Dec 13 00:23:44.543360 containerd[1654]: 2025-12-13 00:23:44.085 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.543360 containerd[1654]: 2025-12-13 00:23:44.159 [INFO][4359] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" HandleID="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Workload="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.159 [INFO][4359] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" HandleID="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Workload="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-g4dxz", "timestamp":"2025-12-13 00:23:44.159805682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.159 [INFO][4359] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.210 [INFO][4359] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.210 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.271 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" host="localhost" Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.284 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.319 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.433 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.473 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:44.543567 containerd[1654]: 2025-12-13 00:23:44.473 [INFO][4359] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" host="localhost" Dec 13 00:23:44.544077 containerd[1654]: 2025-12-13 00:23:44.475 [INFO][4359] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34 Dec 13 00:23:44.544077 containerd[1654]: 2025-12-13 00:23:44.480 [INFO][4359] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" host="localhost" Dec 13 00:23:44.544077 containerd[1654]: 2025-12-13 00:23:44.490 [INFO][4359] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" host="localhost" Dec 13 00:23:44.544077 containerd[1654]: 2025-12-13 00:23:44.491 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" host="localhost" Dec 13 00:23:44.544077 containerd[1654]: 2025-12-13 00:23:44.492 [INFO][4359] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:23:44.544077 containerd[1654]: 2025-12-13 00:23:44.492 [INFO][4359] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" HandleID="k8s-pod-network.b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Workload="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.544810 containerd[1654]: 2025-12-13 00:23:44.499 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g4dxz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4b0ba1ca-25ba-459c-a9ca-af094fdfd26e", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-g4dxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali806d2e8955f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:44.544810 containerd[1654]: 2025-12-13 00:23:44.499 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.544810 containerd[1654]: 2025-12-13 00:23:44.499 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali806d2e8955f ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.544810 containerd[1654]: 2025-12-13 00:23:44.503 
[INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.544810 containerd[1654]: 2025-12-13 00:23:44.505 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g4dxz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4b0ba1ca-25ba-459c-a9ca-af094fdfd26e", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34", Pod:"coredns-66bc5c9577-g4dxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali806d2e8955f", MAC:"26:9b:38:ea:49:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:44.544810 containerd[1654]: 2025-12-13 00:23:44.535 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" Namespace="kube-system" Pod="coredns-66bc5c9577-g4dxz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4dxz-eth0" Dec 13 00:23:44.543000 audit[4442]: NETFILTER_CFG table=filter:120 family=2 entries=94 op=nft_register_chain pid=4442 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:44.543000 audit[4442]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffee3937150 a2=0 a3=7ffee393713c items=0 ppid=4156 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.543000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:44.587729 systemd[1]: Started cri-containerd-a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321.scope - libcontainer container a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321. Dec 13 00:23:44.588909 containerd[1654]: time="2025-12-13T00:23:44.588863829Z" level=info msg="connecting to shim b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34" address="unix:///run/containerd/s/03a2a7c98e07ec604c7f9b7a9fec156c59c543b8e4eefaa6827fd1193146c8e1" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:44.612000 audit: BPF prog-id=215 op=LOAD Dec 13 00:23:44.613000 audit: BPF prog-id=216 op=LOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.613000 audit: BPF prog-id=216 op=UNLOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.613000 audit: BPF prog-id=217 op=LOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.613000 audit: BPF prog-id=218 op=LOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.613000 audit: BPF prog-id=218 op=UNLOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.613000 audit: BPF prog-id=217 op=UNLOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.613000 audit: BPF prog-id=219 op=LOAD Dec 13 00:23:44.613000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4450 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336138316164633934623064626338646262633832666162623363 Dec 13 00:23:44.615943 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:44.628889 systemd[1]: Started cri-containerd-b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34.scope - libcontainer container b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34. 
Dec 13 00:23:44.642000 audit: BPF prog-id=220 op=LOAD Dec 13 00:23:44.643000 audit: BPF prog-id=221 op=LOAD Dec 13 00:23:44.643000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.643000 audit: BPF prog-id=221 op=UNLOAD Dec 13 00:23:44.643000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.643000 audit: BPF prog-id=222 op=LOAD Dec 13 00:23:44.643000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.643000 audit: BPF prog-id=223 op=LOAD Dec 13 00:23:44.643000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.643000 audit: BPF prog-id=223 op=UNLOAD Dec 13 00:23:44.643000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.644000 audit: BPF prog-id=222 op=UNLOAD Dec 13 00:23:44.644000 audit[4509]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.644000 audit: BPF prog-id=224 op=LOAD Dec 13 00:23:44.644000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4489 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236343230653336373863383234633232663262303264373461323139 Dec 13 00:23:44.646926 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:44.666000 audit[4537]: NETFILTER_CFG table=filter:121 family=2 entries=84 op=nft_register_chain pid=4537 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:44.666000 audit[4537]: SYSCALL arch=c000003e syscall=46 success=yes exit=47860 a0=3 a1=7ffcdcf67df0 a2=0 a3=7ffcdcf67ddc items=0 ppid=4156 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.666000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:44.679353 containerd[1654]: time="2025-12-13T00:23:44.679291329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-4dq42,Uid:af29f9bb-907f-43a0-91d7-4904c3687176,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a83a81adc94b0dbc8dbbc82fabb3c4f3acc5073727db5d1b08d51437db3ec321\"" Dec 13 00:23:44.684555 containerd[1654]: time="2025-12-13T00:23:44.684491388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4dxz,Uid:4b0ba1ca-25ba-459c-a9ca-af094fdfd26e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34\"" Dec 13 00:23:44.685872 kubelet[2863]: E1213 00:23:44.685833 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:44.691026 containerd[1654]: time="2025-12-13T00:23:44.690920308Z" level=info msg="CreateContainer within sandbox \"b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:23:44.706306 containerd[1654]: time="2025-12-13T00:23:44.706220275Z" level=info msg="Container f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:44.712952 containerd[1654]: 
time="2025-12-13T00:23:44.712909188Z" level=info msg="CreateContainer within sandbox \"b6420e3678c824c22f2b02d74a2192b256715b8dc47f14bbe00b1bd22f44dd34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559\"" Dec 13 00:23:44.713629 containerd[1654]: time="2025-12-13T00:23:44.713596249Z" level=info msg="StartContainer for \"f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559\"" Dec 13 00:23:44.715001 containerd[1654]: time="2025-12-13T00:23:44.714975270Z" level=info msg="connecting to shim f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559" address="unix:///run/containerd/s/03a2a7c98e07ec604c7f9b7a9fec156c59c543b8e4eefaa6827fd1193146c8e1" protocol=ttrpc version=3 Dec 13 00:23:44.743463 systemd[1]: Started cri-containerd-f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559.scope - libcontainer container f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559. Dec 13 00:23:44.758000 audit: BPF prog-id=225 op=LOAD Dec 13 00:23:44.759000 audit: BPF prog-id=226 op=LOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.759000 audit: BPF prog-id=226 op=UNLOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.759000 audit: BPF prog-id=227 op=LOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.759000 audit: BPF prog-id=228 op=LOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.759000 audit: BPF prog-id=228 op=UNLOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.759000 audit: BPF prog-id=227 op=UNLOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.759000 audit: BPF prog-id=229 op=LOAD Dec 13 00:23:44.759000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4489 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:44.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639373135363234326136626564376635353530313134303333326637 Dec 13 00:23:44.762830 containerd[1654]: time="2025-12-13T00:23:44.762770303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:44.763952 containerd[1654]: time="2025-12-13T00:23:44.763882156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:23:44.764147 containerd[1654]: time="2025-12-13T00:23:44.763975386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:44.764416 kubelet[2863]: E1213 00:23:44.764377 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:23:44.764492 kubelet[2863]: E1213 00:23:44.764427 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:23:44.764762 kubelet[2863]: E1213 00:23:44.764729 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6bb8446dc4-n7rmd_calico-system(e9fa6631-f723-4789-af25-63888ed257d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:44.764994 kubelet[2863]: E1213 00:23:44.764948 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bb8446dc4-n7rmd" podUID="e9fa6631-f723-4789-af25-63888ed257d2" Dec 13 00:23:44.765722 containerd[1654]: time="2025-12-13T00:23:44.765469329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:23:44.783960 containerd[1654]: time="2025-12-13T00:23:44.783903407Z" level=info msg="StartContainer for \"f97156242a6bed7f55501140332f791bb87f516b941c9f304e78e9c3180f4559\" returns successfully" Dec 13 00:23:45.085492 containerd[1654]: time="2025-12-13T00:23:45.085429435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:45.093951 containerd[1654]: time="2025-12-13T00:23:45.093904473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:23:45.093951 containerd[1654]: time="2025-12-13T00:23:45.093934351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:45.094198 kubelet[2863]: E1213 00:23:45.094152 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:23:45.094345 kubelet[2863]: E1213 00:23:45.094207 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:23:45.094345 kubelet[2863]: E1213 00:23:45.094304 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59b87945cd-4dq42_calico-apiserver(af29f9bb-907f-43a0-91d7-4904c3687176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:45.094429 kubelet[2863]: E1213 00:23:45.094336 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:23:45.272395 systemd-networkd[1316]: calibc1dd76683c: Gained IPv6LL Dec 13 00:23:45.597489 kubelet[2863]: E1213 00:23:45.597431 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:23:45.600542 kubelet[2863]: E1213 00:23:45.600505 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:45.602510 kubelet[2863]: E1213 00:23:45.602464 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bb8446dc4-n7rmd" podUID="e9fa6631-f723-4789-af25-63888ed257d2" Dec 13 00:23:45.636897 kubelet[2863]: I1213 00:23:45.636822 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g4dxz" podStartSLOduration=42.636807082 podStartE2EDuration="42.636807082s" podCreationTimestamp="2025-12-13 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:23:45.618956284 +0000 UTC m=+47.708228496" watchObservedRunningTime="2025-12-13 00:23:45.636807082 +0000 UTC m=+47.726079295" Dec 13 00:23:45.641000 audit[4581]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=4581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:45.641000 audit[4581]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe38b3d510 a2=0 a3=7ffe38b3d4fc items=0 ppid=2976 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:45.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:45.659000 audit[4581]: NETFILTER_CFG table=nat:123 family=2 entries=14 op=nft_register_rule pid=4581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:45.659000 audit[4581]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe38b3d510 a2=0 a3=0 items=0 ppid=2976 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:45.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:45.848486 systemd-networkd[1316]: vxlan.calico: Gained IPv6LL Dec 13 00:23:46.104495 systemd-networkd[1316]: cali806d2e8955f: Gained IPv6LL Dec 13 00:23:46.602870 kubelet[2863]: E1213 00:23:46.602794 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:46.604054 kubelet[2863]: E1213 00:23:46.604020 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:23:46.682000 audit[4583]: NETFILTER_CFG table=filter:124 family=2 entries=17 op=nft_register_rule pid=4583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:46.682000 audit[4583]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd6f078a0 a2=0 a3=7fffd6f0788c items=0 ppid=2976 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:46.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:46.699000 audit[4583]: NETFILTER_CFG table=nat:125 family=2 entries=35 op=nft_register_chain pid=4583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:46.699000 audit[4583]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffd6f078a0 a2=0 a3=7fffd6f0788c items=0 ppid=2976 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:46.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:47.020557 containerd[1654]: time="2025-12-13T00:23:47.020510414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp27z,Uid:43eaf899-3f04-44ec-95d7-4d02448959a8,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:47.021912 containerd[1654]: time="2025-12-13T00:23:47.021877966Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59b87945cd-vf98m,Uid:2276dd1e-f4c8-4649-b959-dbfab02532d1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:23:47.138109 systemd-networkd[1316]: caliad7e57cbd74: Link UP Dec 13 00:23:47.138401 systemd-networkd[1316]: caliad7e57cbd74: Gained carrier Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.070 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tp27z-eth0 csi-node-driver- calico-system 43eaf899-3f04-44ec-95d7-4d02448959a8 718 0 2025-12-13 00:23:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tp27z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliad7e57cbd74 [] [] }} ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.071 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.099 [INFO][4615] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" HandleID="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Workload="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.099 [INFO][4615] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" HandleID="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Workload="localhost-k8s-csi--node--driver--tp27z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fed0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tp27z", "timestamp":"2025-12-13 00:23:47.099360156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.099 [INFO][4615] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.099 [INFO][4615] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.099 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.108 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.112 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.116 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.118 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.120 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.120 [INFO][4615] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.121 [INFO][4615] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.124 [INFO][4615] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.129 [INFO][4615] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.130 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" host="localhost" Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.130 [INFO][4615] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:23:47.150280 containerd[1654]: 2025-12-13 00:23:47.130 [INFO][4615] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" HandleID="k8s-pod-network.1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Workload="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.151159 containerd[1654]: 2025-12-13 00:23:47.133 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tp27z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43eaf899-3f04-44ec-95d7-4d02448959a8", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tp27z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad7e57cbd74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:47.151159 containerd[1654]: 2025-12-13 00:23:47.133 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.151159 containerd[1654]: 2025-12-13 00:23:47.133 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad7e57cbd74 ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.151159 containerd[1654]: 2025-12-13 00:23:47.137 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.151159 containerd[1654]: 2025-12-13 00:23:47.138 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tp27z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43eaf899-3f04-44ec-95d7-4d02448959a8", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf", Pod:"csi-node-driver-tp27z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad7e57cbd74", MAC:"82:cb:c2:f3:65:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:47.151159 containerd[1654]: 2025-12-13 00:23:47.147 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" Namespace="calico-system" Pod="csi-node-driver-tp27z" WorkloadEndpoint="localhost-k8s-csi--node--driver--tp27z-eth0" Dec 13 00:23:47.172000 audit[4638]: NETFILTER_CFG table=filter:126 family=2 entries=44 op=nft_register_chain pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:47.172000 audit[4638]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffcde858d20 a2=0 a3=7ffcde858d0c items=0 ppid=4156 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.172000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:47.197562 containerd[1654]: time="2025-12-13T00:23:47.197484892Z" level=info msg="connecting to shim 1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf" address="unix:///run/containerd/s/fc81704368d662bc0dd509bcbda08e2635b9b72b3a8dd0dba6985a951ff0f5e5" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:47.231517 systemd[1]: Started cri-containerd-1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf.scope - libcontainer container 1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf. 
Dec 13 00:23:47.246000 audit: BPF prog-id=230 op=LOAD Dec 13 00:23:47.246000 audit: BPF prog-id=231 op=LOAD Dec 13 00:23:47.246000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.246000 audit: BPF prog-id=231 op=UNLOAD Dec 13 00:23:47.246000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.246000 audit: BPF prog-id=232 op=LOAD Dec 13 00:23:47.246000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.246000 audit: BPF prog-id=233 op=LOAD Dec 13 00:23:47.246000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.246000 audit: BPF prog-id=233 op=UNLOAD Dec 13 00:23:47.246000 audit[4659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.247000 audit: BPF prog-id=232 op=UNLOAD Dec 13 00:23:47.247000 audit[4659]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.247000 audit: BPF prog-id=234 op=LOAD Dec 13 00:23:47.247000 audit[4659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4646 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136343461663565363562323538313430343635626639346137303631 Dec 13 00:23:47.249110 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:47.449871 containerd[1654]: time="2025-12-13T00:23:47.449700180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tp27z,Uid:43eaf899-3f04-44ec-95d7-4d02448959a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1644af5e65b258140465bf94a7061adc07823eed16e4ddc78d8436cac3f12daf\"" Dec 13 00:23:47.451378 containerd[1654]: time="2025-12-13T00:23:47.451352221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:23:47.499285 systemd-networkd[1316]: cali50ab09c3a5a: Link UP Dec 13 00:23:47.500898 systemd-networkd[1316]: cali50ab09c3a5a: Gained carrier Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.069 [INFO][4591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0 calico-apiserver-59b87945cd- calico-apiserver 2276dd1e-f4c8-4649-b959-dbfab02532d1 846 0 2025-12-13 00:23:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59b87945cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59b87945cd-vf98m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali50ab09c3a5a [] [] }} ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.069 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.109 [INFO][4616] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" HandleID="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Workload="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.109 [INFO][4616] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" HandleID="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Workload="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59b87945cd-vf98m", "timestamp":"2025-12-13 00:23:47.109003063 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.109 [INFO][4616] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.130 [INFO][4616] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.130 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.209 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.214 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.222 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.227 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.230 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.230 [INFO][4616] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.232 [INFO][4616] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.484 [INFO][4616] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.491 [INFO][4616] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.491 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" host="localhost" Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.491 [INFO][4616] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:23:47.518923 containerd[1654]: 2025-12-13 00:23:47.491 [INFO][4616] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" HandleID="k8s-pod-network.3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Workload="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.519588 containerd[1654]: 2025-12-13 00:23:47.495 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0", GenerateName:"calico-apiserver-59b87945cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2276dd1e-f4c8-4649-b959-dbfab02532d1", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b87945cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59b87945cd-vf98m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50ab09c3a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:47.519588 containerd[1654]: 2025-12-13 00:23:47.495 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.519588 containerd[1654]: 2025-12-13 00:23:47.495 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50ab09c3a5a ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.519588 containerd[1654]: 2025-12-13 00:23:47.503 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.519588 containerd[1654]: 2025-12-13 00:23:47.504 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0", GenerateName:"calico-apiserver-59b87945cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"2276dd1e-f4c8-4649-b959-dbfab02532d1", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b87945cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd", Pod:"calico-apiserver-59b87945cd-vf98m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50ab09c3a5a", MAC:"e2:5e:b6:f0:24:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:47.519588 containerd[1654]: 2025-12-13 00:23:47.514 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" Namespace="calico-apiserver" Pod="calico-apiserver-59b87945cd-vf98m" WorkloadEndpoint="localhost-k8s-calico--apiserver--59b87945cd--vf98m-eth0" Dec 13 00:23:47.531000 audit[4696]: NETFILTER_CFG table=filter:127 family=2 entries=49 op=nft_register_chain pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:47.531000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffcb4e22820 a2=0 a3=7ffcb4e2280c items=0 ppid=4156 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.531000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:47.541393 containerd[1654]: time="2025-12-13T00:23:47.541193315Z" level=info msg="connecting to shim 3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd" address="unix:///run/containerd/s/9531c659b6b28efe3987fc42e1aa289bae76ba8d8a9d0178935d5f48ab148f5b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:47.573481 systemd[1]: Started 
cri-containerd-3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd.scope - libcontainer container 3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd. Dec 13 00:23:47.587000 audit: BPF prog-id=235 op=LOAD Dec 13 00:23:47.588000 audit: BPF prog-id=236 op=LOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.588000 audit: BPF prog-id=236 op=UNLOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.588000 audit: BPF prog-id=237 op=LOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.588000 audit: BPF prog-id=238 op=LOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.588000 audit: BPF prog-id=238 op=UNLOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.588000 audit: BPF prog-id=237 op=UNLOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.588000 audit: BPF prog-id=239 op=LOAD Dec 13 00:23:47.588000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4706 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:47.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364613139636463656534303534626165343334353263613864646666 Dec 13 00:23:47.590945 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:47.605827 kubelet[2863]: E1213 00:23:47.605791 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:47.632358 containerd[1654]: time="2025-12-13T00:23:47.632288402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b87945cd-vf98m,Uid:2276dd1e-f4c8-4649-b959-dbfab02532d1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3da19cdcee4054bae43452ca8ddff7d3b9d9b7920186e45f06cd14691df975fd\"" Dec 13 00:23:47.827963 containerd[1654]: time="2025-12-13T00:23:47.827899100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:47.853662 containerd[1654]: time="2025-12-13T00:23:47.853562216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:23:47.853857 containerd[1654]: time="2025-12-13T00:23:47.853607534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:47.853963 kubelet[2863]: E1213 00:23:47.853915 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:23:47.854042 kubelet[2863]: E1213 00:23:47.853969 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:23:47.854258 kubelet[2863]: E1213 00:23:47.854182 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:47.854300 containerd[1654]: time="2025-12-13T00:23:47.854279121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:23:48.023086 containerd[1654]: time="2025-12-13T00:23:48.023022947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8shpw,Uid:1fd617ff-92d1-4ae1-9f14-72f718d2a63a,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:48.024614 kubelet[2863]: E1213 00:23:48.024583 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:48.026270 containerd[1654]: time="2025-12-13T00:23:48.025283230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-685xl,Uid:b1c23d96-704b-4a74-9865-eaafccfb9bb7,Namespace:kube-system,Attempt:0,}" Dec 13 00:23:48.026788 containerd[1654]: time="2025-12-13T00:23:48.026698361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54554ff9b-xwvbk,Uid:3bbc635a-68f2-4b21-9037-215dfb791b81,Namespace:calico-system,Attempt:0,}" Dec 13 00:23:48.242364 systemd-networkd[1316]: calia8c0c8ddccd: Link UP Dec 13 00:23:48.245481 systemd-networkd[1316]: calia8c0c8ddccd: Gained carrier Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.088 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--8shpw-eth0 goldmane-7c778bb748- calico-system 1fd617ff-92d1-4ae1-9f14-72f718d2a63a 849 0 2025-12-13 00:23:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-8shpw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia8c0c8ddccd [] [] }} ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.088 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.187 [INFO][4790] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" HandleID="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Workload="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.187 
[INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" HandleID="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Workload="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-8shpw", "timestamp":"2025-12-13 00:23:48.187161989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.187 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.187 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.187 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.194 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.200 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.207 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.210 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.212 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.212 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.214 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.218 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.228 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.228 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" host="localhost" Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.229 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
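The IPAM messages above follow Calico's block-affinity scheme: this host holds an affinity for 192.168.88.128/26 and hands out the next free address from that block (.132, .133 and .134 are claimed in this excerpt). A simplified sketch of that selection step, added for illustration only (not Calico's actual allocator, which also tracks handles and reservations):

import ipaddress

# Host-affine block and the addresses already claimed in this log excerpt.
block = ipaddress.ip_network("192.168.88.128/26")
allocated = {ipaddress.ip_address(a) for a in
             ("192.168.88.132", "192.168.88.133", "192.168.88.134")}

# Pick the first host address in the block that is not yet allocated.
next_free = next(ip for ip in block.hosts() if ip not in allocated)
print(next_free)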
Dec 13 00:23:48.262922 containerd[1654]: 2025-12-13 00:23:48.229 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" HandleID="k8s-pod-network.3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Workload="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.263623 containerd[1654]: 2025-12-13 00:23:48.233 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8shpw-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1fd617ff-92d1-4ae1-9f14-72f718d2a63a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-8shpw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8c0c8ddccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:48.263623 containerd[1654]: 2025-12-13 00:23:48.234 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.263623 containerd[1654]: 2025-12-13 00:23:48.234 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8c0c8ddccd ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.263623 containerd[1654]: 2025-12-13 00:23:48.245 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.263623 containerd[1654]: 2025-12-13 00:23:48.246 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8shpw-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1fd617ff-92d1-4ae1-9f14-72f718d2a63a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e", Pod:"goldmane-7c778bb748-8shpw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8c0c8ddccd", MAC:"1e:00:8a:45:ff:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:48.263623 containerd[1654]: 2025-12-13 00:23:48.259 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" Namespace="calico-system" Pod="goldmane-7c778bb748-8shpw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8shpw-eth0" Dec 13 00:23:48.273073 containerd[1654]: time="2025-12-13T00:23:48.272986749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:48.280000 audit[4826]: NETFILTER_CFG table=filter:128 family=2 entries=66 op=nft_register_chain pid=4826 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:48.280000 audit[4826]: SYSCALL arch=c000003e syscall=46 success=yes exit=32784 a0=3 a1=7ffcfd905970 a2=0 a3=7ffcfd90595c items=0 ppid=4156 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.280000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:48.489347 containerd[1654]: time="2025-12-13T00:23:48.489163180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:23:48.489676 containerd[1654]: time="2025-12-13T00:23:48.489202426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:48.489958 kubelet[2863]: E1213 00:23:48.489897 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:23:48.489958 kubelet[2863]: E1213 00:23:48.489955 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:23:48.490427 kubelet[2863]: E1213 00:23:48.490303 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59b87945cd-vf98m_calico-apiserver(2276dd1e-f4c8-4649-b959-dbfab02532d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:48.490427 kubelet[2863]: E1213 00:23:48.490355 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:23:48.490706 containerd[1654]: time="2025-12-13T00:23:48.490381272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:23:48.524542 systemd-networkd[1316]: calid829b7a18cf: Link UP Dec 13 00:23:48.527149 systemd-networkd[1316]: calid829b7a18cf: Gained carrier Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.152 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--685xl-eth0 coredns-66bc5c9577- kube-system b1c23d96-704b-4a74-9865-eaafccfb9bb7 844 0 2025-12-13 00:23:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-685xl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid829b7a18cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.152 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.198 [INFO][4796] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" HandleID="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Workload="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.199 [INFO][4796] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" HandleID="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Workload="localhost-k8s-coredns--66bc5c9577--685xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001384c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-685xl", "timestamp":"2025-12-13 00:23:48.198989951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.199 [INFO][4796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.229 [INFO][4796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.229 [INFO][4796] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.296 [INFO][4796] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.301 [INFO][4796] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.307 [INFO][4796] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.308 [INFO][4796] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.311 [INFO][4796] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.311 [INFO][4796] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.312 [INFO][4796] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409 Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.487 [INFO][4796] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.517 [INFO][4796] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.517 [INFO][4796] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" host="localhost" Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.517 [INFO][4796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:23:48.555204 containerd[1654]: 2025-12-13 00:23:48.517 [INFO][4796] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" HandleID="k8s-pod-network.40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Workload="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.556776 containerd[1654]: 2025-12-13 00:23:48.520 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--685xl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b1c23d96-704b-4a74-9865-eaafccfb9bb7", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-685xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid829b7a18cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:48.556776 containerd[1654]: 2025-12-13 00:23:48.521 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.556776 containerd[1654]: 2025-12-13 00:23:48.521 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid829b7a18cf ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.556776 containerd[1654]: 2025-12-13 00:23:48.527 
[INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.556776 containerd[1654]: 2025-12-13 00:23:48.528 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--685xl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b1c23d96-704b-4a74-9865-eaafccfb9bb7", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409", Pod:"coredns-66bc5c9577-685xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid829b7a18cf", MAC:"7a:bf:19:71:fe:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:48.556776 containerd[1654]: 2025-12-13 00:23:48.548 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" Namespace="kube-system" Pod="coredns-66bc5c9577-685xl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--685xl-eth0" Dec 13 00:23:48.573888 containerd[1654]: time="2025-12-13T00:23:48.573827931Z" level=info msg="connecting to shim 3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e" address="unix:///run/containerd/s/b1ed3edf6c9e3ba875cf5bd408a094b2421aa8359c6d428b63df16e69c2e277b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:48.590299 kernel: kauditd_printk_skb: 355 callbacks suppressed Dec 13 00:23:48.590490 kernel: 
audit: type=1325 audit(1765585428.584:720): table=filter:129 family=2 entries=48 op=nft_register_chain pid=4849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:48.584000 audit[4849]: NETFILTER_CFG table=filter:129 family=2 entries=48 op=nft_register_chain pid=4849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:48.584000 audit[4849]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffdaad00f80 a2=0 a3=7ffdaad00f6c items=0 ppid=4156 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.609481 kubelet[2863]: E1213 00:23:48.609421 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:23:48.584000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:48.620523 kernel: audit: type=1300 audit(1765585428.584:720): arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffdaad00f80 a2=0 a3=7ffdaad00f6c items=0 ppid=4156 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.620629 kernel: audit: type=1327 audit(1765585428.584:720): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:48.645491 systemd[1]: Started cri-containerd-3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e.scope - libcontainer container 3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e. 
Dec 13 00:23:48.657000 audit: BPF prog-id=240 op=LOAD Dec 13 00:23:48.658000 audit: BPF prog-id=241 op=LOAD Dec 13 00:23:48.661705 kernel: audit: type=1334 audit(1765585428.657:721): prog-id=240 op=LOAD Dec 13 00:23:48.661761 kernel: audit: type=1334 audit(1765585428.658:722): prog-id=241 op=LOAD Dec 13 00:23:48.661792 kernel: audit: type=1300 audit(1765585428.658:722): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.662056 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.671968 kernel: audit: type=1327 audit(1765585428.658:722): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.658000 audit: BPF prog-id=241 op=UNLOAD Dec 13 00:23:48.673370 kernel: audit: type=1334 audit(1765585428.658:723): prog-id=241 op=UNLOAD Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.678338 kernel: audit: type=1300 audit(1765585428.658:723): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.683851 kernel: audit: type=1327 audit(1765585428.658:723): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.658000 audit: BPF prog-id=242 op=LOAD Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.658000 audit: BPF prog-id=243 op=LOAD Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.658000 audit: BPF prog-id=243 op=UNLOAD Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.658000 audit: BPF prog-id=242 op=UNLOAD Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.658000 audit: BPF prog-id=244 op=LOAD Dec 13 00:23:48.658000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4846 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:48.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363626435613538626565383761373165363336623633656639363963 Dec 13 00:23:48.728443 systemd-networkd[1316]: caliad7e57cbd74: Gained IPv6LL Dec 13 00:23:48.854474 containerd[1654]: time="2025-12-13T00:23:48.854327727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8shpw,Uid:1fd617ff-92d1-4ae1-9f14-72f718d2a63a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cbd5a58bee87a71e636b63ef969c4a132850f21564c4d18d523a4c826d5525e\"" Dec 13 00:23:48.897634 containerd[1654]: 
time="2025-12-13T00:23:48.897557555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:49.048346 containerd[1654]: time="2025-12-13T00:23:49.048182254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:23:49.048827 containerd[1654]: time="2025-12-13T00:23:49.048219185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:49.048863 kubelet[2863]: E1213 00:23:49.048535 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:23:49.048863 kubelet[2863]: E1213 00:23:49.048577 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:23:49.048863 kubelet[2863]: E1213 00:23:49.048739 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:49.048863 kubelet[2863]: E1213 00:23:49.048790 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:49.049024 containerd[1654]: time="2025-12-13T00:23:49.048931459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:23:49.118000 audit[4894]: NETFILTER_CFG table=filter:130 family=2 entries=14 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:49.118000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff97b16500 a2=0 a3=7fff97b164ec items=0 ppid=2976 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.118000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:49.126509 systemd-networkd[1316]: caliecdd60203c1: Link UP Dec 13 00:23:49.127978 containerd[1654]: time="2025-12-13T00:23:49.127729824Z" level=info msg="connecting to shim 40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409" address="unix:///run/containerd/s/59be3600c9cc0f3322788bd0c53bc727ad7264a1cd8c7f29b696d94173dd893b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:49.128000 audit[4894]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:49.128000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff97b16500 a2=0 a3=7fff97b164ec items=0 ppid=2976 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:49.132893 systemd-networkd[1316]: caliecdd60203c1: Gained carrier Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.155 [INFO][4771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0 calico-kube-controllers-54554ff9b- calico-system 3bbc635a-68f2-4b21-9037-215dfb791b81 850 0 2025-12-13 00:23:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54554ff9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54554ff9b-xwvbk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliecdd60203c1 [] [] }} ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.156 [INFO][4771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.211 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" HandleID="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Workload="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.211 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" HandleID="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Workload="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-54554ff9b-xwvbk", "timestamp":"2025-12-13 00:23:48.211575194 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.212 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.517 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.517 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.557 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.566 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.575 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.579 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.583 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.583 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:48.806 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:49.094 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:49.108 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:49.108 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" host="localhost" Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:49.108 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:23:49.164259 containerd[1654]: 2025-12-13 00:23:49.108 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" HandleID="k8s-pod-network.54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Workload="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.164983 containerd[1654]: 2025-12-13 00:23:49.114 [INFO][4771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0", GenerateName:"calico-kube-controllers-54554ff9b-", Namespace:"calico-system", SelfLink:"", UID:"3bbc635a-68f2-4b21-9037-215dfb791b81", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54554ff9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54554ff9b-xwvbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecdd60203c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:49.164983 containerd[1654]: 2025-12-13 00:23:49.115 [INFO][4771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.164983 containerd[1654]: 2025-12-13 00:23:49.115 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecdd60203c1 ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.164983 containerd[1654]: 2025-12-13 00:23:49.135 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.164983 containerd[1654]: 2025-12-13 00:23:49.139 [INFO][4771] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0", GenerateName:"calico-kube-controllers-54554ff9b-", Namespace:"calico-system", SelfLink:"", UID:"3bbc635a-68f2-4b21-9037-215dfb791b81", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 23, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54554ff9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b", Pod:"calico-kube-controllers-54554ff9b-xwvbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecdd60203c1", MAC:"a6:fd:0c:4d:03:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:23:49.164983 containerd[1654]: 2025-12-13 00:23:49.155 [INFO][4771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" Namespace="calico-system" Pod="calico-kube-controllers-54554ff9b-xwvbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54554ff9b--xwvbk-eth0" Dec 13 00:23:49.189000 audit[4928]: NETFILTER_CFG table=filter:132 family=2 entries=56 op=nft_register_chain pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:23:49.189000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7fff26e590d0 a2=0 a3=7fff26e590bc items=0 ppid=4156 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.189000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:23:49.194471 systemd[1]: Started cri-containerd-40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409.scope - libcontainer container 40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409. Dec 13 00:23:49.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.91:22-10.0.0.1:40804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:49.199451 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:40804.service - OpenSSH per-connection server daemon (10.0.0.1:40804). Dec 13 00:23:49.213000 audit: BPF prog-id=245 op=LOAD Dec 13 00:23:49.221000 audit: BPF prog-id=246 op=LOAD Dec 13 00:23:49.221000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.223000 audit: BPF prog-id=246 op=UNLOAD Dec 13 00:23:49.223000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.225000 audit: BPF prog-id=247 op=LOAD Dec 13 00:23:49.225000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.225000 audit: BPF prog-id=248 op=LOAD Dec 13 00:23:49.225000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.226000 audit: BPF prog-id=248 op=UNLOAD Dec 13 00:23:49.226000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.226000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.228056 containerd[1654]: time="2025-12-13T00:23:49.227470024Z" level=info msg="connecting to shim 54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b" address="unix:///run/containerd/s/b1ae225444b8b0dab9a37725ced3ce8c4d896e13e26f0baba180046b69f32692" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:23:49.226000 audit: BPF prog-id=247 op=UNLOAD Dec 13 00:23:49.226000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.227000 audit: BPF prog-id=249 op=LOAD Dec 13 00:23:49.227000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4896 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430373434646435623836623832323766396431633766313364313039 Dec 13 00:23:49.238648 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:49.268470 systemd[1]: Started cri-containerd-54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b.scope - libcontainer container 54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b. 
Dec 13 00:23:49.292000 audit: BPF prog-id=250 op=LOAD Dec 13 00:23:49.292000 audit: BPF prog-id=251 op=LOAD Dec 13 00:23:49.292000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.292000 audit: BPF prog-id=251 op=UNLOAD Dec 13 00:23:49.292000 audit[4957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.293000 audit: BPF prog-id=252 op=LOAD Dec 13 00:23:49.293000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.294000 audit: BPF prog-id=253 op=LOAD Dec 13 00:23:49.294000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.294000 audit: BPF prog-id=253 op=UNLOAD Dec 13 00:23:49.294000 audit[4957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.294000 audit: BPF prog-id=252 op=UNLOAD Dec 13 00:23:49.294000 audit[4957]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.294000 audit: BPF prog-id=254 op=LOAD Dec 13 00:23:49.294000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534636663653163333934356463306664663938346330616463376666 Dec 13 00:23:49.298639 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:23:49.300000 audit[4929]: USER_ACCT pid=4929 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.302000 audit[4929]: CRED_ACQ pid=4929 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.302000 audit[4929]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc59429f80 a2=3 a3=0 items=0 ppid=1 pid=4929 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.302000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:49.304439 sshd[4929]: Accepted publickey for core from 10.0.0.1 port 40804 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:23:49.304838 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:49.310802 containerd[1654]: time="2025-12-13T00:23:49.310751547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-685xl,Uid:b1c23d96-704b-4a74-9865-eaafccfb9bb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409\"" Dec 13 00:23:49.311939 kubelet[2863]: E1213 00:23:49.311896 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:49.312582 systemd-logind[1637]: New session 11 of user core. 
Dec 13 00:23:49.319741 containerd[1654]: time="2025-12-13T00:23:49.319595196Z" level=info msg="CreateContainer within sandbox \"40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:23:49.320483 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 00:23:49.325000 audit[4929]: USER_START pid=4929 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.328000 audit[4985]: CRED_ACQ pid=4985 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.334708 containerd[1654]: time="2025-12-13T00:23:49.334644244Z" level=info msg="Container baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:23:49.346805 containerd[1654]: time="2025-12-13T00:23:49.346737280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54554ff9b-xwvbk,Uid:3bbc635a-68f2-4b21-9037-215dfb791b81,Namespace:calico-system,Attempt:0,} returns sandbox id \"54cfce1c3945dc0fdf984c0adc7ffc2a870d5bb065a173fe4c2b737040eaeb0b\"" Dec 13 00:23:49.348878 containerd[1654]: time="2025-12-13T00:23:49.348821491Z" level=info msg="CreateContainer within sandbox \"40744dd5b86b8227f9d1c7f13d109c1796125f2c5984d3be9b0d4c135afe1409\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a\"" Dec 13 00:23:49.349333 containerd[1654]: time="2025-12-13T00:23:49.349303951Z" level=info msg="StartContainer for \"baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a\"" Dec 13 00:23:49.350510 containerd[1654]: time="2025-12-13T00:23:49.350484398Z" level=info msg="connecting to shim baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a" address="unix:///run/containerd/s/59be3600c9cc0f3322788bd0c53bc727ad7264a1cd8c7f29b696d94173dd893b" protocol=ttrpc version=3 Dec 13 00:23:49.378500 systemd[1]: Started cri-containerd-baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a.scope - libcontainer container baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a. 
Dec 13 00:23:49.397000 audit: BPF prog-id=255 op=LOAD Dec 13 00:23:49.397000 audit: BPF prog-id=256 op=LOAD Dec 13 00:23:49.397000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.398000 audit: BPF prog-id=256 op=UNLOAD Dec 13 00:23:49.398000 audit[4996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.398000 audit: BPF prog-id=257 op=LOAD Dec 13 00:23:49.398000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.398000 audit: BPF prog-id=258 op=LOAD Dec 13 00:23:49.398000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.399000 audit: BPF prog-id=258 op=UNLOAD Dec 13 00:23:49.399000 audit[4996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.399000 audit: BPF prog-id=257 op=UNLOAD Dec 13 00:23:49.399000 audit[4996]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.399000 audit: BPF prog-id=259 op=LOAD Dec 13 00:23:49.399000 audit[4996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4896 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261656437366365366261303932636464333436646639633663663637 Dec 13 00:23:49.428680 containerd[1654]: time="2025-12-13T00:23:49.428639453Z" level=info msg="StartContainer for \"baed76ce6ba092cdd346df9c6cf67561df9ba54d1340d0bf5ae1152fa2cae81a\" returns successfully" Dec 13 00:23:49.453130 containerd[1654]: time="2025-12-13T00:23:49.453073784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:49.454545 containerd[1654]: time="2025-12-13T00:23:49.454489756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:23:49.454616 containerd[1654]: time="2025-12-13T00:23:49.454585520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:49.454850 kubelet[2863]: E1213 00:23:49.454778 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:23:49.454912 kubelet[2863]: E1213 00:23:49.454857 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:23:49.455372 containerd[1654]: time="2025-12-13T00:23:49.455173614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:23:49.456322 kubelet[2863]: E1213 00:23:49.455028 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8shpw_calico-system(1fd617ff-92d1-4ae1-9f14-72f718d2a63a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:49.456500 kubelet[2863]: E1213 00:23:49.456379 2863 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:23:49.460183 sshd[4985]: Connection closed by 10.0.0.1 port 40804 Dec 13 00:23:49.461838 sshd-session[4929]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:49.462000 audit[4929]: USER_END pid=4929 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.462000 audit[4929]: CRED_DISP pid=4929 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.471206 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:40804.service: Deactivated successfully. Dec 13 00:23:49.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.91:22-10.0.0.1:40804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:49.474549 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 00:23:49.475885 systemd-logind[1637]: Session 11 logged out. Waiting for processes to exit. Dec 13 00:23:49.478281 systemd-logind[1637]: Removed session 11. Dec 13 00:23:49.480692 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:40810.service - OpenSSH per-connection server daemon (10.0.0.1:40810). Dec 13 00:23:49.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.91:22-10.0.0.1:40810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:49.496390 systemd-networkd[1316]: cali50ab09c3a5a: Gained IPv6LL Dec 13 00:23:49.545000 audit[5039]: USER_ACCT pid=5039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.546578 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 40810 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:23:49.546000 audit[5039]: CRED_ACQ pid=5039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.546000 audit[5039]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12754bb0 a2=3 a3=0 items=0 ppid=1 pid=5039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.546000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:49.549014 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:49.554283 systemd-logind[1637]: New session 12 of user core. Dec 13 00:23:49.560525 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 00:23:49.562000 audit[5039]: USER_START pid=5039 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.564000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.616065 kubelet[2863]: E1213 00:23:49.615984 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:49.619382 kubelet[2863]: E1213 00:23:49.619308 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:23:49.621469 kubelet[2863]: E1213 00:23:49.621417 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" 
podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:23:49.623697 kubelet[2863]: E1213 00:23:49.623647 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:23:49.657470 kubelet[2863]: I1213 00:23:49.656696 2863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-685xl" podStartSLOduration=46.656680894 podStartE2EDuration="46.656680894s" podCreationTimestamp="2025-12-13 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:23:49.638761099 +0000 UTC m=+51.728033331" watchObservedRunningTime="2025-12-13 00:23:49.656680894 +0000 UTC m=+51.745953106" Dec 13 00:23:49.704733 sshd[5050]: Connection closed by 10.0.0.1 port 40810 Dec 13 00:23:49.705201 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:49.706000 audit[5039]: USER_END pid=5039 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.706000 audit[5039]: CRED_DISP pid=5039 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.91:22-10.0.0.1:40810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:49.719694 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:40810.service: Deactivated successfully. Dec 13 00:23:49.725399 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 00:23:49.727212 systemd-logind[1637]: Session 12 logged out. Waiting for processes to exit. Dec 13 00:23:49.734923 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:40820.service - OpenSSH per-connection server daemon (10.0.0.1:40820). Dec 13 00:23:49.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.91:22-10.0.0.1:40820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:49.736617 systemd-logind[1637]: Removed session 12. 
Dec 13 00:23:49.752434 systemd-networkd[1316]: calid829b7a18cf: Gained IPv6LL Dec 13 00:23:49.793000 audit[5061]: USER_ACCT pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.794639 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 40820 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:23:49.794000 audit[5061]: CRED_ACQ pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.794000 audit[5061]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd718e7770 a2=3 a3=0 items=0 ppid=1 pid=5061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:49.794000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:49.796917 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:49.801911 systemd-logind[1637]: New session 13 of user core. Dec 13 00:23:49.814404 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 00:23:49.815000 audit[5061]: USER_START pid=5061 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.818000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.828258 containerd[1654]: time="2025-12-13T00:23:49.828197394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:23:49.829446 containerd[1654]: time="2025-12-13T00:23:49.829392990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:23:49.829531 containerd[1654]: time="2025-12-13T00:23:49.829490800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:23:49.830276 kubelet[2863]: E1213 00:23:49.830192 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:23:49.830738 kubelet[2863]: E1213 00:23:49.830357 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:23:49.830738 kubelet[2863]: E1213 00:23:49.830442 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54554ff9b-xwvbk_calico-system(3bbc635a-68f2-4b21-9037-215dfb791b81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:23:49.830738 kubelet[2863]: E1213 00:23:49.830474 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" podUID="3bbc635a-68f2-4b21-9037-215dfb791b81" Dec 13 00:23:49.915430 sshd[5065]: Connection closed by 10.0.0.1 port 40820 Dec 13 00:23:49.915667 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:49.916000 audit[5061]: USER_END pid=5061 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.916000 audit[5061]: CRED_DISP pid=5061 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:49.920611 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:40820.service: Deactivated successfully. Dec 13 00:23:49.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.91:22-10.0.0.1:40820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:49.922999 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 00:23:49.924115 systemd-logind[1637]: Session 13 logged out. Waiting for processes to exit. Dec 13 00:23:49.925942 systemd-logind[1637]: Removed session 13. 
Dec 13 00:23:49.944572 systemd-networkd[1316]: calia8c0c8ddccd: Gained IPv6LL Dec 13 00:23:50.145000 audit[5079]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:50.145000 audit[5079]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb10475d0 a2=0 a3=7fffb10475bc items=0 ppid=2976 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:50.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:50.157000 audit[5079]: NETFILTER_CFG table=nat:134 family=2 entries=56 op=nft_register_chain pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:50.157000 audit[5079]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffb10475d0 a2=0 a3=7fffb10475bc items=0 ppid=2976 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:50.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:50.620664 kubelet[2863]: E1213 00:23:50.620356 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:50.621170 kubelet[2863]: E1213 00:23:50.621131 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:23:50.622439 kubelet[2863]: E1213 00:23:50.621604 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" podUID="3bbc635a-68f2-4b21-9037-215dfb791b81" Dec 13 00:23:50.904491 systemd-networkd[1316]: caliecdd60203c1: Gained IPv6LL Dec 13 00:23:54.931530 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:54440.service - OpenSSH per-connection server daemon (10.0.0.1:54440). Dec 13 00:23:54.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.91:22-10.0.0.1:54440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:54.933090 kernel: kauditd_printk_skb: 129 callbacks suppressed Dec 13 00:23:54.933160 kernel: audit: type=1130 audit(1765585434.930:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.91:22-10.0.0.1:54440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:55.001000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.002748 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 54440 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:23:55.005530 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:55.011588 systemd-logind[1637]: New session 14 of user core. Dec 13 00:23:55.002000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.028098 kernel: audit: type=1101 audit(1765585435.001:786): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.028151 kernel: audit: type=1103 audit(1765585435.002:787): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.028176 kernel: audit: type=1006 audit(1765585435.002:788): pid=5090 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 00:23:55.002000 audit[5090]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3b7a9bc0 a2=3 a3=0 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:55.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:55.038351 kernel: audit: type=1300 audit(1765585435.002:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3b7a9bc0 a2=3 a3=0 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:55.038407 kernel: audit: type=1327 audit(1765585435.002:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:55.040714 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 00:23:55.042000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.045000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.055392 kernel: audit: type=1105 audit(1765585435.042:789): pid=5090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.056199 kernel: audit: type=1103 audit(1765585435.045:790): pid=5094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.130246 sshd[5094]: Connection closed by 10.0.0.1 port 54440 Dec 13 00:23:55.130561 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:55.130000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.136107 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:54440.service: Deactivated successfully. Dec 13 00:23:55.138598 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 00:23:55.139699 systemd-logind[1637]: Session 14 logged out. Waiting for processes to exit. Dec 13 00:23:55.141098 systemd-logind[1637]: Removed session 14. Dec 13 00:23:55.131000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.150461 kernel: audit: type=1106 audit(1765585435.130:791): pid=5090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.150606 kernel: audit: type=1104 audit(1765585435.131:792): pid=5090 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:55.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.91:22-10.0.0.1:54440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:00.019095 containerd[1654]: time="2025-12-13T00:24:00.018907521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:24:00.152154 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:33472.service - OpenSSH per-connection server daemon (10.0.0.1:33472). Dec 13 00:24:00.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.91:22-10.0.0.1:33472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:00.153576 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:24:00.153651 kernel: audit: type=1130 audit(1765585440.152:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.91:22-10.0.0.1:33472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:00.216000 audit[5115]: USER_ACCT pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.217295 sshd[5115]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:00.219607 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:00.217000 audit[5115]: CRED_ACQ pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.224903 systemd-logind[1637]: New session 15 of user core. Dec 13 00:24:00.229689 kernel: audit: type=1101 audit(1765585440.216:795): pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.229757 kernel: audit: type=1103 audit(1765585440.217:796): pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.232528 kernel: audit: type=1006 audit(1765585440.218:797): pid=5115 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 00:24:00.218000 audit[5115]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb542ef70 a2=3 a3=0 items=0 ppid=1 pid=5115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:00.234423 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 00:24:00.237725 kernel: audit: type=1300 audit(1765585440.218:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb542ef70 a2=3 a3=0 items=0 ppid=1 pid=5115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:00.237839 kernel: audit: type=1327 audit(1765585440.218:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:00.218000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:00.236000 audit[5115]: USER_START pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.245514 kernel: audit: type=1105 audit(1765585440.236:798): pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.245643 kernel: audit: type=1103 audit(1765585440.238:799): pid=5119 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.238000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.427644 sshd[5119]: Connection closed by 10.0.0.1 port 33472 Dec 13 00:24:00.427628 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:00.429000 audit[5115]: USER_END pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.434538 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:33472.service: Deactivated successfully. Dec 13 00:24:00.429000 audit[5115]: CRED_DISP pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.437762 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 00:24:00.439198 systemd-logind[1637]: Session 15 logged out. Waiting for processes to exit. Dec 13 00:24:00.441009 systemd-logind[1637]: Removed session 15. 
Dec 13 00:24:00.441491 kernel: audit: type=1106 audit(1765585440.429:800): pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.441595 kernel: audit: type=1104 audit(1765585440.429:801): pid=5115 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:00.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.91:22-10.0.0.1:33472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:00.544579 containerd[1654]: time="2025-12-13T00:24:00.544508606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:00.727995 containerd[1654]: time="2025-12-13T00:24:00.727936758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:00.728151 containerd[1654]: time="2025-12-13T00:24:00.728069113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:24:00.728407 kubelet[2863]: E1213 00:24:00.728372 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:24:00.728891 kubelet[2863]: E1213 00:24:00.728418 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:24:00.728891 kubelet[2863]: E1213 00:24:00.728649 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6bb8446dc4-n7rmd_calico-system(e9fa6631-f723-4789-af25-63888ed257d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:00.728954 containerd[1654]: time="2025-12-13T00:24:00.728796165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:24:01.170196 containerd[1654]: time="2025-12-13T00:24:01.169938696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:01.172104 containerd[1654]: time="2025-12-13T00:24:01.171849876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:24:01.172372 containerd[1654]: time="2025-12-13T00:24:01.171917856Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:01.172956 kubelet[2863]: E1213 00:24:01.172906 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:01.172956 kubelet[2863]: E1213 00:24:01.172953 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:01.173602 containerd[1654]: time="2025-12-13T00:24:01.173291136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:24:01.175430 kubelet[2863]: E1213 00:24:01.173136 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59b87945cd-4dq42_calico-apiserver(af29f9bb-907f-43a0-91d7-4904c3687176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:01.175430 kubelet[2863]: E1213 00:24:01.173860 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:24:01.558789 containerd[1654]: time="2025-12-13T00:24:01.558725313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:01.560358 containerd[1654]: time="2025-12-13T00:24:01.560189267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:24:01.560358 containerd[1654]: time="2025-12-13T00:24:01.560266544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:01.560704 kubelet[2863]: E1213 00:24:01.560656 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:24:01.560825 kubelet[2863]: E1213 00:24:01.560713 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:24:01.560825 kubelet[2863]: E1213 00:24:01.560793 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
whisker-backend start failed in pod whisker-6bb8446dc4-n7rmd_calico-system(e9fa6631-f723-4789-af25-63888ed257d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:01.560885 kubelet[2863]: E1213 00:24:01.560839 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bb8446dc4-n7rmd" podUID="e9fa6631-f723-4789-af25-63888ed257d2" Dec 13 00:24:02.019727 containerd[1654]: time="2025-12-13T00:24:02.019389788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:24:02.488921 containerd[1654]: time="2025-12-13T00:24:02.488857459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:02.506278 containerd[1654]: time="2025-12-13T00:24:02.506150479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:24:02.506817 containerd[1654]: time="2025-12-13T00:24:02.506652279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:02.506857 kubelet[2863]: E1213 00:24:02.506701 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:24:02.506857 kubelet[2863]: E1213 00:24:02.506778 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:24:02.507457 kubelet[2863]: E1213 00:24:02.507064 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54554ff9b-xwvbk_calico-system(3bbc635a-68f2-4b21-9037-215dfb791b81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:02.507457 kubelet[2863]: E1213 00:24:02.507143 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" podUID="3bbc635a-68f2-4b21-9037-215dfb791b81" Dec 13 00:24:02.507521 containerd[1654]: time="2025-12-13T00:24:02.507279359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:24:02.914860 containerd[1654]: time="2025-12-13T00:24:02.914572835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:02.985669 containerd[1654]: time="2025-12-13T00:24:02.985587370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:24:02.985852 containerd[1654]: time="2025-12-13T00:24:02.985622528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:02.985996 kubelet[2863]: E1213 00:24:02.985934 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:02.985996 kubelet[2863]: E1213 00:24:02.985993 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:02.986307 kubelet[2863]: E1213 00:24:02.986221 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59b87945cd-vf98m_calico-apiserver(2276dd1e-f4c8-4649-b959-dbfab02532d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:02.986379 kubelet[2863]: E1213 00:24:02.986314 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:24:02.986418 containerd[1654]: time="2025-12-13T00:24:02.986388855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:24:03.400439 containerd[1654]: time="2025-12-13T00:24:03.400363155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:03.418347 containerd[1654]: time="2025-12-13T00:24:03.418257230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:24:03.418519 containerd[1654]: time="2025-12-13T00:24:03.418351370Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:03.418699 kubelet[2863]: E1213 00:24:03.418626 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:24:03.418699 kubelet[2863]: E1213 00:24:03.418698 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:24:03.419032 kubelet[2863]: E1213 00:24:03.418978 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:03.420203 containerd[1654]: time="2025-12-13T00:24:03.420159880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:24:03.826137 containerd[1654]: time="2025-12-13T00:24:03.826061232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:03.935854 containerd[1654]: time="2025-12-13T00:24:03.935745866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:24:03.935854 containerd[1654]: time="2025-12-13T00:24:03.935837262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:03.936252 kubelet[2863]: E1213 00:24:03.936191 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:24:03.936719 kubelet[2863]: E1213 00:24:03.936261 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:24:03.936719 kubelet[2863]: E1213 00:24:03.936508 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8shpw_calico-system(1fd617ff-92d1-4ae1-9f14-72f718d2a63a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:03.936719 kubelet[2863]: E1213 00:24:03.936585 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:24:03.936856 containerd[1654]: time="2025-12-13T00:24:03.936750398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:24:04.281873 containerd[1654]: time="2025-12-13T00:24:04.281768133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:04.283224 containerd[1654]: time="2025-12-13T00:24:04.283176877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:24:04.283320 containerd[1654]: time="2025-12-13T00:24:04.283256800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:04.283530 kubelet[2863]: E1213 00:24:04.283483 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:24:04.283598 kubelet[2863]: E1213 00:24:04.283544 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:24:04.283658 kubelet[2863]: E1213 00:24:04.283636 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:04.283722 kubelet[2863]: E1213 00:24:04.283687 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:24:05.447124 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:33486.service - OpenSSH per-connection server daemon (10.0.0.1:33486). Dec 13 00:24:05.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.91:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:05.476744 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:24:05.476856 kernel: audit: type=1130 audit(1765585445.446:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.91:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:05.542000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.544172 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 33486 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:05.546687 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:05.551919 systemd-logind[1637]: New session 16 of user core. Dec 13 00:24:05.543000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.584134 kernel: audit: type=1101 audit(1765585445.542:804): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.584215 kernel: audit: type=1103 audit(1765585445.543:805): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.584278 kernel: audit: type=1006 audit(1765585445.544:806): pid=5142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 00:24:05.587321 kernel: audit: type=1300 audit(1765585445.544:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07e439d0 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:05.544000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07e439d0 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:05.544000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:05.595199 kernel: audit: type=1327 audit(1765585445.544:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:05.596478 systemd[1]: Started session-16.scope - Session 16 of User core. 
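Every PullImage failure above shares one root cause: ghcr.io answers 404 Not Found for the flatcar/calico images at tag v3.30.4, so containerd cannot resolve the manifest and kubelet records ErrImagePull for each container. A minimal check from the node, assuming skopeo and crictl happen to be available (neither is implied by this log):

    skopeo inspect docker://ghcr.io/flatcar/calico/apiserver:v3.30.4   # ask the registry for the manifest directly
    crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4               # reproduce the kubelet -> containerd pull path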
Dec 13 00:24:05.598000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.601000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.613684 kernel: audit: type=1105 audit(1765585445.598:807): pid=5142 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.613776 kernel: audit: type=1103 audit(1765585445.601:808): pid=5146 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.682473 sshd[5146]: Connection closed by 10.0.0.1 port 33486 Dec 13 00:24:05.682791 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:05.682000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.687485 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:33486.service: Deactivated successfully. Dec 13 00:24:05.689844 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 00:24:05.682000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.691844 systemd-logind[1637]: Session 16 logged out. Waiting for processes to exit. Dec 13 00:24:05.692914 systemd-logind[1637]: Removed session 16. Dec 13 00:24:05.695867 kernel: audit: type=1106 audit(1765585445.682:809): pid=5142 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.695927 kernel: audit: type=1104 audit(1765585445.682:810): pid=5142 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:05.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.91:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:10.705422 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:45266.service - OpenSSH per-connection server daemon (10.0.0.1:45266). Dec 13 00:24:10.707337 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:24:10.707381 kernel: audit: type=1130 audit(1765585450.704:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.91:22-10.0.0.1:45266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:10.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.91:22-10.0.0.1:45266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:10.765000 audit[5161]: USER_ACCT pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.767386 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 45266 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:10.770017 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:10.767000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.775354 systemd-logind[1637]: New session 17 of user core. Dec 13 00:24:10.778933 kernel: audit: type=1101 audit(1765585450.765:813): pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.778990 kernel: audit: type=1103 audit(1765585450.767:814): pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.779017 kernel: audit: type=1006 audit(1765585450.767:815): pid=5161 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 00:24:10.767000 audit[5161]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe08866170 a2=3 a3=0 items=0 ppid=1 pid=5161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:10.788671 kernel: audit: type=1300 audit(1765585450.767:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe08866170 a2=3 a3=0 items=0 ppid=1 pid=5161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:10.788720 kernel: audit: type=1327 audit(1765585450.767:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:10.767000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:10.798664 systemd[1]: Started session-17.scope - Session 17 of User 
core. Dec 13 00:24:10.800000 audit[5161]: USER_START pid=5161 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.800000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.812299 kernel: audit: type=1105 audit(1765585450.800:816): pid=5161 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.812406 kernel: audit: type=1103 audit(1765585450.800:817): pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.877530 sshd[5165]: Connection closed by 10.0.0.1 port 45266 Dec 13 00:24:10.877850 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:10.877000 audit[5161]: USER_END pid=5161 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.883044 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:45266.service: Deactivated successfully. Dec 13 00:24:10.890710 kernel: audit: type=1106 audit(1765585450.877:818): pid=5161 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.890884 kernel: audit: type=1104 audit(1765585450.877:819): pid=5161 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.877000 audit[5161]: CRED_DISP pid=5161 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:10.886062 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 00:24:10.887155 systemd-logind[1637]: Session 17 logged out. Waiting for processes to exit. Dec 13 00:24:10.888604 systemd-logind[1637]: Removed session 17. Dec 13 00:24:10.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.91:22-10.0.0.1:45266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:12.018819 kubelet[2863]: E1213 00:24:12.018775 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:24:13.692070 kubelet[2863]: E1213 00:24:13.692018 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:24:15.018518 kubelet[2863]: E1213 00:24:15.018429 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bb8446dc4-n7rmd" podUID="e9fa6631-f723-4789-af25-63888ed257d2" Dec 13 00:24:15.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.91:22-10.0.0.1:45278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:15.893100 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:45278.service - OpenSSH per-connection server daemon (10.0.0.1:45278). Dec 13 00:24:15.894710 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:24:15.894766 kernel: audit: type=1130 audit(1765585455.892:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.91:22-10.0.0.1:45278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:15.970000 audit[5206]: USER_ACCT pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:15.972253 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 45278 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:15.974725 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:15.972000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:15.980230 systemd-logind[1637]: New session 18 of user core. 
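The recurring kubelet dns.go warning above means the node's resolv.conf lists more than three nameservers; the Linux resolver, and therefore kubelet, only honours three, so kubelet applies the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and reports the rest as omitted. A quick check on the node (this path is kubelet's default; a custom --resolv-conf would point elsewhere):

    grep '^nameserver' /etc/resolv.conf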
Dec 13 00:24:15.981518 kernel: audit: type=1101 audit(1765585455.970:822): pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:15.981598 kernel: audit: type=1103 audit(1765585455.972:823): pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:15.981648 kernel: audit: type=1006 audit(1765585455.972:824): pid=5206 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 00:24:15.972000 audit[5206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbcba6c20 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:15.990157 kernel: audit: type=1300 audit(1765585455.972:824): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbcba6c20 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:15.990291 kernel: audit: type=1327 audit(1765585455.972:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:15.972000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:15.993544 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 13 00:24:15.995000 audit[5206]: USER_START pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:15.997000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.007890 kernel: audit: type=1105 audit(1765585455.995:825): pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.008274 kernel: audit: type=1103 audit(1765585455.997:826): pid=5210 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.020026 kubelet[2863]: E1213 00:24:16.019944 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:24:16.098172 sshd[5210]: Connection closed by 10.0.0.1 port 45278 Dec 13 00:24:16.098463 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:16.098000 audit[5206]: USER_END pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.098000 audit[5206]: CRED_DISP pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.110356 kernel: audit: type=1106 audit(1765585456.098:827): pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.110457 kernel: audit: type=1104 audit(1765585456.098:828): pid=5206 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.116449 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:45278.service: Deactivated successfully. 
Dec 13 00:24:16.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.91:22-10.0.0.1:45278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:16.118986 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 00:24:16.120009 systemd-logind[1637]: Session 18 logged out. Waiting for processes to exit. Dec 13 00:24:16.123975 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:45286.service - OpenSSH per-connection server daemon (10.0.0.1:45286). Dec 13 00:24:16.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.91:22-10.0.0.1:45286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:16.124958 systemd-logind[1637]: Removed session 18. Dec 13 00:24:16.186000 audit[5223]: USER_ACCT pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.187754 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 45286 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:16.187000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.187000 audit[5223]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcba682810 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:16.187000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:16.190121 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:16.195029 systemd-logind[1637]: New session 19 of user core. Dec 13 00:24:16.203405 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 00:24:16.204000 audit[5223]: USER_START pid=5223 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.206000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.415507 sshd[5227]: Connection closed by 10.0.0.1 port 45286 Dec 13 00:24:16.415928 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:16.416000 audit[5223]: USER_END pid=5223 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.416000 audit[5223]: CRED_DISP pid=5223 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.431221 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:45286.service: Deactivated successfully. Dec 13 00:24:16.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.91:22-10.0.0.1:45286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:16.433714 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 00:24:16.434700 systemd-logind[1637]: Session 19 logged out. Waiting for processes to exit. Dec 13 00:24:16.437955 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:45298.service - OpenSSH per-connection server daemon (10.0.0.1:45298). Dec 13 00:24:16.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.91:22-10.0.0.1:45298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:16.439095 systemd-logind[1637]: Removed session 19. 
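Each SSH connection in this log leaves the same PAM audit sequence: USER_ACCT, CRED_ACQ and USER_START on login, then USER_END and CRED_DISP on logout, all tagged with one ses= number. If auditd and its userspace tools were installed (the kauditd_printk_skb lines suggest records are only reaching the kernel log here), a single session could be pulled out with something like:

    ausearch -m USER_START,USER_END --session 19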
Dec 13 00:24:16.494000 audit[5239]: USER_ACCT pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.495621 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 45298 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:16.495000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.495000 audit[5239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6dedc7c0 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:16.495000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:16.498265 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:16.503174 systemd-logind[1637]: New session 20 of user core. Dec 13 00:24:16.510406 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 00:24:16.511000 audit[5239]: USER_START pid=5239 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.513000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.988000 audit[5255]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:16.988000 audit[5255]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe78232fb0 a2=0 a3=7ffe78232f9c items=0 ppid=2976 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:16.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:24:16.995000 audit[5255]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:16.995000 audit[5255]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe78232fb0 a2=0 a3=0 items=0 ppid=2976 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:16.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:24:16.998051 sshd[5243]: Connection closed by 10.0.0.1 port 45298 Dec 13 00:24:16.998091 sshd-session[5239]: pam_unix(sshd:session): 
session closed for user core Dec 13 00:24:16.999000 audit[5239]: USER_END pid=5239 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:16.999000 audit[5239]: CRED_DISP pid=5239 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.011476 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:45298.service: Deactivated successfully. Dec 13 00:24:17.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.91:22-10.0.0.1:45298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:17.015210 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 00:24:17.018086 systemd-logind[1637]: Session 20 logged out. Waiting for processes to exit. Dec 13 00:24:17.019974 kubelet[2863]: E1213 00:24:17.019486 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" podUID="3bbc635a-68f2-4b21-9037-215dfb791b81" Dec 13 00:24:17.021609 kubelet[2863]: E1213 00:24:17.021573 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:24:17.024798 kubelet[2863]: E1213 00:24:17.024726 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:24:17.025143 systemd-logind[1637]: Removed session 20. Dec 13 00:24:17.027767 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:45312.service - OpenSSH per-connection server daemon (10.0.0.1:45312). Dec 13 00:24:17.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.91:22-10.0.0.1:45312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:17.108000 audit[5260]: USER_ACCT pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.109825 sshd[5260]: Accepted publickey for core from 10.0.0.1 port 45312 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:17.109000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.110000 audit[5260]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9654bd80 a2=3 a3=0 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:17.110000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:17.112723 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:17.118412 systemd-logind[1637]: New session 21 of user core. Dec 13 00:24:17.129419 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 00:24:17.131000 audit[5260]: USER_START pid=5260 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.133000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.325675 sshd[5264]: Connection closed by 10.0.0.1 port 45312 Dec 13 00:24:17.328524 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:17.328000 audit[5260]: USER_END pid=5260 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.329000 audit[5260]: CRED_DISP pid=5260 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.340806 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:45312.service: Deactivated successfully. Dec 13 00:24:17.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.91:22-10.0.0.1:45312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:17.343618 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 00:24:17.344810 systemd-logind[1637]: Session 21 logged out. Waiting for processes to exit. 
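The PROCTITLE field in the NETFILTER_CFG audit events is the process argv, hex-encoded with NUL separators; the value recorded above decodes to "iptables-restore -w 5 --noflush --counters", i.e. a periodic rule restore (most likely kube-proxy, ppid 2976) rather than a manual iptables change. One way to decode such a field, assuming xxd is available:

    echo 69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 | xxd -r -p | tr '\0' ' '; echo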
Dec 13 00:24:17.349070 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:45322.service - OpenSSH per-connection server daemon (10.0.0.1:45322). Dec 13 00:24:17.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.91:22-10.0.0.1:45322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:17.349852 systemd-logind[1637]: Removed session 21. Dec 13 00:24:17.414000 audit[5276]: USER_ACCT pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.416571 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 45322 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:17.417000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.417000 audit[5276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedd334070 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:17.417000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:17.419720 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:17.425256 systemd-logind[1637]: New session 22 of user core. Dec 13 00:24:17.444454 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 00:24:17.445000 audit[5276]: USER_START pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.448000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.538434 sshd[5280]: Connection closed by 10.0.0.1 port 45322 Dec 13 00:24:17.538753 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:17.539000 audit[5276]: USER_END pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.539000 audit[5276]: CRED_DISP pid=5276 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:17.544688 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:45322.service: Deactivated successfully. 
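systemd is running sshd in per-connection (socket-activated) mode here: each accepted connection gets its own transient unit named sshd@<seq>-<local addr>:22-<peer addr>:<port>.service, which is why every login/logout pair is bracketed by a SERVICE_START and SERVICE_STOP for a unit such as sshd@20-10.0.0.91:22-10.0.0.1:45322. Active per-connection units can be listed with:

    systemctl list-units 'sshd@*'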
Dec 13 00:24:17.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.91:22-10.0.0.1:45322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:17.547086 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 00:24:17.547975 systemd-logind[1637]: Session 22 logged out. Waiting for processes to exit. Dec 13 00:24:17.549507 systemd-logind[1637]: Removed session 22. Dec 13 00:24:18.013000 audit[5294]: NETFILTER_CFG table=filter:137 family=2 entries=38 op=nft_register_rule pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:18.013000 audit[5294]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffebf11c570 a2=0 a3=7ffebf11c55c items=0 ppid=2976 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:18.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:24:18.018508 kubelet[2863]: E1213 00:24:18.018216 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:24:18.020538 kubelet[2863]: E1213 00:24:18.020493 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:24:18.027000 audit[5294]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:18.027000 audit[5294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffebf11c570 a2=0 a3=0 items=0 ppid=2976 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:18.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:24:22.555773 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:43290.service - OpenSSH per-connection server daemon (10.0.0.1:43290). Dec 13 00:24:22.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.91:22-10.0.0.1:43290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:22.557293 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 00:24:22.557352 kernel: audit: type=1130 audit(1765585462.554:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.91:22-10.0.0.1:43290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:22.628000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.632992 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:22.633997 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:22.635284 kernel: audit: type=1101 audit(1765585462.628:871): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.635415 kernel: audit: type=1103 audit(1765585462.629:872): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.629000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.641974 systemd-logind[1637]: New session 23 of user core. Dec 13 00:24:22.644453 kernel: audit: type=1006 audit(1765585462.630:873): pid=5299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 00:24:22.644498 kernel: audit: type=1300 audit(1765585462.630:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff818735b0 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:22.630000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff818735b0 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:22.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:22.653006 kernel: audit: type=1327 audit(1765585462.630:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:22.661676 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 00:24:22.663000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.666000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.678171 kernel: audit: type=1105 audit(1765585462.663:874): pid=5299 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.678358 kernel: audit: type=1103 audit(1765585462.666:875): pid=5303 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.707000 audit[5313]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:22.718862 kernel: audit: type=1325 audit(1765585462.707:876): table=filter:139 family=2 entries=26 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:22.719012 kernel: audit: type=1300 audit(1765585462.707:876): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7a89c770 a2=0 a3=7ffd7a89c75c items=0 ppid=2976 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:22.707000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7a89c770 a2=0 a3=7ffd7a89c75c items=0 ppid=2976 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:22.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:24:22.721000 audit[5313]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:24:22.721000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd7a89c770 a2=0 a3=7ffd7a89c75c items=0 ppid=2976 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:22.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:24:22.768884 sshd[5303]: Connection closed by 10.0.0.1 port 43290 Dec 13 00:24:22.769319 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:22.769000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.769000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:22.775307 systemd[1]: sshd@21-10.0.0.91:22-10.0.0.1:43290.service: Deactivated successfully. Dec 13 00:24:22.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.91:22-10.0.0.1:43290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:22.778645 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 00:24:22.779980 systemd-logind[1637]: Session 23 logged out. Waiting for processes to exit. Dec 13 00:24:22.781619 systemd-logind[1637]: Removed session 23. Dec 13 00:24:23.017054 kubelet[2863]: E1213 00:24:23.017005 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:24:26.018038 containerd[1654]: time="2025-12-13T00:24:26.017968630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:24:26.352161 containerd[1654]: time="2025-12-13T00:24:26.351966096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:26.353370 containerd[1654]: time="2025-12-13T00:24:26.353325557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:24:26.353430 containerd[1654]: time="2025-12-13T00:24:26.353369641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:26.353663 kubelet[2863]: E1213 00:24:26.353599 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:26.354028 kubelet[2863]: E1213 00:24:26.353665 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:26.354028 kubelet[2863]: E1213 00:24:26.353770 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59b87945cd-4dq42_calico-apiserver(af29f9bb-907f-43a0-91d7-4904c3687176): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:26.354028 kubelet[2863]: E1213 00:24:26.353807 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-4dq42" podUID="af29f9bb-907f-43a0-91d7-4904c3687176" Dec 13 00:24:27.786668 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:43300.service - OpenSSH per-connection server daemon (10.0.0.1:43300). Dec 13 00:24:27.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.91:22-10.0.0.1:43300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:27.788228 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 00:24:27.788317 kernel: audit: type=1130 audit(1765585467.785:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.91:22-10.0.0.1:43300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:27.870000 audit[5325]: USER_ACCT pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.871876 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 43300 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:27.874807 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:27.872000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.880760 systemd-logind[1637]: New session 24 of user core. 
Dec 13 00:24:27.881477 kernel: audit: type=1101 audit(1765585467.870:882): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.881518 kernel: audit: type=1103 audit(1765585467.872:883): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.881544 kernel: audit: type=1006 audit(1765585467.872:884): pid=5325 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 00:24:27.884357 kernel: audit: type=1300 audit(1765585467.872:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd12e3bbc0 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:27.872000 audit[5325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd12e3bbc0 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:27.889589 kernel: audit: type=1327 audit(1765585467.872:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:27.872000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:27.893406 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 13 00:24:27.895000 audit[5325]: USER_START pid=5325 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.897000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.908312 kernel: audit: type=1105 audit(1765585467.895:885): pid=5325 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.908356 kernel: audit: type=1103 audit(1765585467.897:886): pid=5329 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.985604 sshd[5329]: Connection closed by 10.0.0.1 port 43300 Dec 13 00:24:27.985934 sshd-session[5325]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:27.986000 audit[5325]: USER_END pid=5325 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.992368 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:43300.service: Deactivated successfully. Dec 13 00:24:27.986000 audit[5325]: CRED_DISP pid=5325 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.995174 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 00:24:27.996590 systemd-logind[1637]: Session 24 logged out. Waiting for processes to exit. Dec 13 00:24:27.997797 kernel: audit: type=1106 audit(1765585467.986:887): pid=5325 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.997865 kernel: audit: type=1104 audit(1765585467.986:888): pid=5325 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:27.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.91:22-10.0.0.1:43300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:27.998906 systemd-logind[1637]: Removed session 24. 
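The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes separating the arguments. A short sketch decoding the two values that appear in this log:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL separators."""
    return bytes.fromhex(hex_value).decode("utf-8", errors="replace").replace("\x00", " ")

# Values copied verbatim from the audit records above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
))  # iptables-restore -w 5 --noflush --counters
print(decode_proctitle(
    "737368642D73657373696F6E3A20636F7265205B707269765D"
))  # sshd-session: core [priv]
```

ausearch -i performs the same interpretation when reading the raw audit log directly.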
Dec 13 00:24:28.023768 containerd[1654]: time="2025-12-13T00:24:28.023703860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:24:28.360858 containerd[1654]: time="2025-12-13T00:24:28.360708353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:28.362743 containerd[1654]: time="2025-12-13T00:24:28.362657202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:24:28.362842 containerd[1654]: time="2025-12-13T00:24:28.362760037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:28.363053 kubelet[2863]: E1213 00:24:28.362975 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:24:28.363053 kubelet[2863]: E1213 00:24:28.363040 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:24:28.363599 kubelet[2863]: E1213 00:24:28.363319 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54554ff9b-xwvbk_calico-system(3bbc635a-68f2-4b21-9037-215dfb791b81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:28.363599 kubelet[2863]: E1213 00:24:28.363478 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54554ff9b-xwvbk" podUID="3bbc635a-68f2-4b21-9037-215dfb791b81" Dec 13 00:24:28.363672 containerd[1654]: time="2025-12-13T00:24:28.363472298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:24:28.679785 containerd[1654]: time="2025-12-13T00:24:28.679534440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:28.680917 containerd[1654]: time="2025-12-13T00:24:28.680873431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:24:28.681020 containerd[1654]: time="2025-12-13T00:24:28.680969424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:28.681213 kubelet[2863]: E1213 00:24:28.681164 2863 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:24:28.681408 kubelet[2863]: E1213 00:24:28.681223 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:24:28.681408 kubelet[2863]: E1213 00:24:28.681326 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:28.682647 containerd[1654]: time="2025-12-13T00:24:28.682606059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:24:29.040786 containerd[1654]: time="2025-12-13T00:24:29.040718895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:29.179592 containerd[1654]: time="2025-12-13T00:24:29.179506772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:29.179789 containerd[1654]: time="2025-12-13T00:24:29.179544274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:24:29.179904 kubelet[2863]: E1213 00:24:29.179850 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:24:29.179958 kubelet[2863]: E1213 00:24:29.179904 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:24:29.180219 kubelet[2863]: E1213 00:24:29.180167 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-tp27z_calico-system(43eaf899-3f04-44ec-95d7-4d02448959a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:29.180336 kubelet[2863]: E1213 00:24:29.180272 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tp27z" podUID="43eaf899-3f04-44ec-95d7-4d02448959a8" Dec 13 00:24:29.180425 containerd[1654]: time="2025-12-13T00:24:29.180291140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:24:29.623061 containerd[1654]: time="2025-12-13T00:24:29.622968808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:29.681492 containerd[1654]: time="2025-12-13T00:24:29.681338765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:24:29.681492 containerd[1654]: time="2025-12-13T00:24:29.681424828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:29.681756 kubelet[2863]: E1213 00:24:29.681657 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:29.681756 kubelet[2863]: E1213 00:24:29.681714 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:24:29.682192 kubelet[2863]: E1213 00:24:29.681826 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-59b87945cd-vf98m_calico-apiserver(2276dd1e-f4c8-4649-b959-dbfab02532d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:29.682192 kubelet[2863]: E1213 00:24:29.681871 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59b87945cd-vf98m" podUID="2276dd1e-f4c8-4649-b959-dbfab02532d1" Dec 13 00:24:30.018993 containerd[1654]: time="2025-12-13T00:24:30.018931704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:24:30.420216 containerd[1654]: time="2025-12-13T00:24:30.420056286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:30.513899 containerd[1654]: time="2025-12-13T00:24:30.513798281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:30.514068 containerd[1654]: 
time="2025-12-13T00:24:30.513881388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:24:30.514251 kubelet[2863]: E1213 00:24:30.514159 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:24:30.514251 kubelet[2863]: E1213 00:24:30.514226 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:24:30.514488 kubelet[2863]: E1213 00:24:30.514452 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8shpw_calico-system(1fd617ff-92d1-4ae1-9f14-72f718d2a63a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:30.514568 kubelet[2863]: E1213 00:24:30.514515 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8shpw" podUID="1fd617ff-92d1-4ae1-9f14-72f718d2a63a" Dec 13 00:24:30.514655 containerd[1654]: time="2025-12-13T00:24:30.514624859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:24:30.970918 containerd[1654]: time="2025-12-13T00:24:30.970818406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:30.972889 containerd[1654]: time="2025-12-13T00:24:30.972853156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:24:30.972989 containerd[1654]: time="2025-12-13T00:24:30.972933929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:30.973179 kubelet[2863]: E1213 00:24:30.973127 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:24:30.973573 kubelet[2863]: E1213 00:24:30.973185 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:24:30.973573 kubelet[2863]: E1213 00:24:30.973303 
2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6bb8446dc4-n7rmd_calico-system(e9fa6631-f723-4789-af25-63888ed257d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:30.974379 containerd[1654]: time="2025-12-13T00:24:30.974324797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:24:31.304380 containerd[1654]: time="2025-12-13T00:24:31.304304949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:24:31.411751 containerd[1654]: time="2025-12-13T00:24:31.411622524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:24:31.411751 containerd[1654]: time="2025-12-13T00:24:31.411708678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:24:31.412148 kubelet[2863]: E1213 00:24:31.412085 2863 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:24:31.412219 kubelet[2863]: E1213 00:24:31.412153 2863 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:24:31.412319 kubelet[2863]: E1213 00:24:31.412260 2863 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6bb8446dc4-n7rmd_calico-system(e9fa6631-f723-4789-af25-63888ed257d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:24:31.412372 kubelet[2863]: E1213 00:24:31.412303 2863 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bb8446dc4-n7rmd" podUID="e9fa6631-f723-4789-af25-63888ed257d2" Dec 13 00:24:32.351589 update_engine[1639]: I20251213 00:24:32.351482 1639 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 00:24:32.351589 update_engine[1639]: I20251213 00:24:32.351588 1639 prefs.cc:52] certificate-report-to-send-download not 
present in /var/lib/update_engine/prefs Dec 13 00:24:32.353645 update_engine[1639]: I20251213 00:24:32.353610 1639 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 00:24:32.354189 update_engine[1639]: I20251213 00:24:32.354157 1639 omaha_request_params.cc:62] Current group set to alpha Dec 13 00:24:32.354376 update_engine[1639]: I20251213 00:24:32.354323 1639 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 00:24:32.354376 update_engine[1639]: I20251213 00:24:32.354357 1639 update_attempter.cc:643] Scheduling an action processor start. Dec 13 00:24:32.354471 update_engine[1639]: I20251213 00:24:32.354383 1639 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 00:24:32.354471 update_engine[1639]: I20251213 00:24:32.354455 1639 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 00:24:32.354543 update_engine[1639]: I20251213 00:24:32.354526 1639 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 00:24:32.354543 update_engine[1639]: I20251213 00:24:32.354535 1639 omaha_request_action.cc:272] Request: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354543 update_engine[1639]: Dec 13 00:24:32.354827 update_engine[1639]: I20251213 00:24:32.354547 1639 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 00:24:32.360040 update_engine[1639]: I20251213 00:24:32.359923 1639 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 00:24:32.361609 update_engine[1639]: I20251213 00:24:32.361055 1639 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 00:24:32.367402 update_engine[1639]: E20251213 00:24:32.367322 1639 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 13 00:24:32.367560 update_engine[1639]: I20251213 00:24:32.367441 1639 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 00:24:32.412396 locksmithd[1706]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 00:24:32.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.91:22-10.0.0.1:47082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:32.997996 systemd[1]: Started sshd@23-10.0.0.91:22-10.0.0.1:47082.service - OpenSSH per-connection server daemon (10.0.0.1:47082). Dec 13 00:24:33.003971 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:24:33.004444 kernel: audit: type=1130 audit(1765585472.997:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.91:22-10.0.0.1:47082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:33.017422 kubelet[2863]: E1213 00:24:33.017375 2863 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:24:33.079000 audit[5344]: USER_ACCT pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.080405 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 47082 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:24:33.083512 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:24:33.081000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.091120 kernel: audit: type=1101 audit(1765585473.079:891): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.091283 kernel: audit: type=1103 audit(1765585473.081:892): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.091136 systemd-logind[1637]: New session 25 of user core. Dec 13 00:24:33.100875 kernel: audit: type=1006 audit(1765585473.081:893): pid=5344 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 00:24:33.101012 kernel: audit: type=1300 audit(1765585473.081:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0a3ac510 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:33.081000 audit[5344]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0a3ac510 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:24:33.097479 systemd[1]: Started session-25.scope - Session 25 of User core. 
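The kubelet dns.go:154 warning just above (and earlier at 00:24:23) means the node's resolv.conf lists more nameservers than kubelet will propagate to pods; the "applied nameserver line" shows the three entries that were kept. A minimal check, assuming the node resolver configuration is the standard /etc/resolv.conf:

```python
KUBELET_MAX_NAMESERVERS = 3  # kubelet keeps at most three entries, matching the three addresses in the warning above

def nameservers(path: str = "/etc/resolv.conf") -> list:
    """Return the nameserver addresses listed in a resolv.conf-style file."""
    servers = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers

if __name__ == "__main__":
    ns = nameservers()
    if len(ns) > KUBELET_MAX_NAMESERVERS:
        print(f"{len(ns)} nameservers configured; kubelet applies only {ns[:KUBELET_MAX_NAMESERVERS]}")
```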
Dec 13 00:24:33.081000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:33.104298 kernel: audit: type=1327 audit(1765585473.081:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:24:33.104000 audit[5344]: USER_START pid=5344 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.108000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.117403 kernel: audit: type=1105 audit(1765585473.104:894): pid=5344 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.117457 kernel: audit: type=1103 audit(1765585473.108:895): pid=5348 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.205750 sshd[5348]: Connection closed by 10.0.0.1 port 47082 Dec 13 00:24:33.206445 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Dec 13 00:24:33.208000 audit[5344]: USER_END pid=5344 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.213880 systemd-logind[1637]: Session 25 logged out. Waiting for processes to exit. Dec 13 00:24:33.208000 audit[5344]: CRED_DISP pid=5344 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.214898 systemd[1]: sshd@23-10.0.0.91:22-10.0.0.1:47082.service: Deactivated successfully. Dec 13 00:24:33.217661 systemd[1]: session-25.scope: Deactivated successfully. 
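Each named audit record in this section is also echoed by kauditd as a numeric kernel: audit: type=NNNN line carrying the same timestamp and serial (for example, type=1105 at serial 885 pairs with the USER_START record for session 24 above). The numeric types seen here correspond to record names as in the small lookup below:

```python
# Audit record types appearing in this log, paired with the record names that
# the same entries are printed with elsewhere (same timestamp and serial).
AUDIT_TYPES = {
    1006: "LOGIN",
    1101: "USER_ACCT",
    1103: "CRED_ACQ",
    1104: "CRED_DISP",
    1105: "USER_START",
    1106: "USER_END",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1325: "NETFILTER_CFG",
    1327: "PROCTITLE",
}

def audit_type_name(type_id: int) -> str:
    """Map a numeric audit type to its record name, or mark it unknown."""
    return AUDIT_TYPES.get(type_id, f"UNKNOWN({type_id})")

print(audit_type_name(1105))  # USER_START
```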
Dec 13 00:24:33.218856 kernel: audit: type=1106 audit(1765585473.208:896): pid=5344 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.218942 kernel: audit: type=1104 audit(1765585473.208:897): pid=5344 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:24:33.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.91:22-10.0.0.1:47082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:33.219294 systemd-logind[1637]: Removed session 25.
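The update_engine entries earlier in this section post the Omaha request to the literal host "disabled", so the curl DNS failure and one-second retry are expected rather than a network fault: the update server on this machine is set to "disabled" instead of a real endpoint. A sketch of inspecting that configuration; the file locations are the usual Flatcar ones (/usr/share/flatcar/update.conf for defaults, /etc/flatcar/update.conf for overrides) and are an assumption here, not something the log states:

```python
def read_update_conf(path: str) -> dict:
    """Parse KEY=VALUE lines from an update.conf-style file, ignoring comments."""
    conf = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    conf[key.strip()] = value.strip()
    except FileNotFoundError:
        pass
    return conf

# Assumed locations: baked-in defaults first, then local overrides.
conf = read_update_conf("/usr/share/flatcar/update.conf")
conf.update(read_update_conf("/etc/flatcar/update.conf"))
print("GROUP  =", conf.get("GROUP"))   # "alpha", matching "Current group set to alpha" above
print("SERVER =", conf.get("SERVER"))  # "disabled" would explain the failed host resolution
```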