Dec 16 03:16:23.418170 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 03:16:23.418196 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:16:23.418208 kernel: BIOS-provided physical RAM map:
Dec 16 03:16:23.418215 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 03:16:23.418222 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 03:16:23.418228 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 03:16:23.418236 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 16 03:16:23.418243 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 16 03:16:23.418255 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 03:16:23.418262 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 03:16:23.418271 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 03:16:23.418278 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 03:16:23.418286 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 03:16:23.418295 kernel: NX (Execute Disable) protection: active
Dec 16 03:16:23.418307 kernel: APIC: Static calls initialized
Dec 16 03:16:23.418319 kernel: SMBIOS 2.8 present.
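As an aside, the two `usable` regions in the e820 map above can be summed to estimate the RAM the firmware hands to the kernel; the result lands within a few pages of the `2571752K` total the kernel reports later during memory init. A minimal sketch (the `usable` lines are copied verbatim from this log):

```python
import re

# The two "usable" BIOS-e820 lines from the boot log above.
e820_usable = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]

total = 0
for line in e820_usable:
    m = re.search(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\]", line)
    start, end = (int(x, 16) for x in m.groups())
    total += end - start + 1  # e820 end addresses are inclusive

print(f"usable: {total} bytes (~{total / 2**20:.0f} MiB)")  # ~2511 MiB
```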
Dec 16 03:16:23.418333 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 16 03:16:23.418343 kernel: DMI: Memory slots populated: 1/1
Dec 16 03:16:23.418353 kernel: Hypervisor detected: KVM
Dec 16 03:16:23.418363 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 16 03:16:23.418373 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 03:16:23.418384 kernel: kvm-clock: using sched offset of 4207333964 cycles
Dec 16 03:16:23.418395 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 03:16:23.418406 kernel: tsc: Detected 2794.748 MHz processor
Dec 16 03:16:23.418419 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 03:16:23.418436 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 03:16:23.418448 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 16 03:16:23.418459 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 03:16:23.418470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 03:16:23.418481 kernel: Using GB pages for direct mapping
Dec 16 03:16:23.418492 kernel: ACPI: Early table checksum verification disabled
Dec 16 03:16:23.418507 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 16 03:16:23.418518 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418529 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418539 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418549 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 16 03:16:23.418560 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418571 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418584 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418594 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:16:23.418608 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Dec 16 03:16:23.418619 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Dec 16 03:16:23.418630 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 16 03:16:23.418640 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Dec 16 03:16:23.418653 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Dec 16 03:16:23.418663 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Dec 16 03:16:23.418673 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Dec 16 03:16:23.418684 kernel: No NUMA configuration found
Dec 16 03:16:23.418696 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 16 03:16:23.418707 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Dec 16 03:16:23.418720 kernel: Zone ranges:
Dec 16 03:16:23.418732 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 03:16:23.418743 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 16 03:16:23.418754 kernel: Normal empty
Dec 16 03:16:23.418765 kernel: Device empty
Dec 16 03:16:23.418776 kernel: Movable zone start for each node
Dec 16 03:16:23.418787 kernel: Early memory node ranges
Dec 16 03:16:23.418801 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 03:16:23.418812 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 16 03:16:23.418824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 16 03:16:23.418835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 03:16:23.418846 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 03:16:23.418857 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 03:16:23.418874 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 03:16:23.418895 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 03:16:23.418909 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 03:16:23.418920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 03:16:23.418934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 03:16:23.418945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 03:16:23.418957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 03:16:23.418968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 03:16:23.418979 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 03:16:23.418993 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 03:16:23.419004 kernel: TSC deadline timer available
Dec 16 03:16:23.419015 kernel: CPU topo: Max. logical packages: 1
Dec 16 03:16:23.419026 kernel: CPU topo: Max. logical dies: 1
Dec 16 03:16:23.419037 kernel: CPU topo: Max. dies per package: 1
Dec 16 03:16:23.419048 kernel: CPU topo: Max. threads per core: 1
Dec 16 03:16:23.419059 kernel: CPU topo: Num. cores per package: 4
Dec 16 03:16:23.419070 kernel: CPU topo: Num. threads per package: 4
Dec 16 03:16:23.419083 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 03:16:23.419109 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 03:16:23.419120 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 03:16:23.419131 kernel: kvm-guest: setup PV sched yield
Dec 16 03:16:23.419142 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 03:16:23.419153 kernel: Booting paravirtualized kernel on KVM
Dec 16 03:16:23.419164 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 03:16:23.419179 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 03:16:23.419191 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 03:16:23.419202 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 03:16:23.419213 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 03:16:23.419224 kernel: kvm-guest: PV spinlocks enabled
Dec 16 03:16:23.419235 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 03:16:23.419247 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:16:23.419261 kernel: random: crng init done
Dec 16 03:16:23.419272 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 03:16:23.419284 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 03:16:23.419295 kernel: Fallback order for Node 0: 0
Dec 16 03:16:23.419306 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Dec 16 03:16:23.419317 kernel: Policy zone: DMA32
Dec 16 03:16:23.419328 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 03:16:23.419342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 03:16:23.419353 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 03:16:23.419365 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 03:16:23.419376 kernel: Dynamic Preempt: voluntary
Dec 16 03:16:23.419387 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 03:16:23.419399 kernel: rcu: RCU event tracing is enabled.
Dec 16 03:16:23.419410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 03:16:23.419425 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 03:16:23.419439 kernel: Rude variant of Tasks RCU enabled.
Dec 16 03:16:23.419450 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 03:16:23.419462 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 03:16:23.419473 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 03:16:23.419484 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 03:16:23.419496 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 03:16:23.419507 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 03:16:23.419520 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 03:16:23.419532 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 03:16:23.419551 kernel: Console: colour VGA+ 80x25
Dec 16 03:16:23.419565 kernel: printk: legacy console [ttyS0] enabled
Dec 16 03:16:23.419577 kernel: ACPI: Core revision 20240827
Dec 16 03:16:23.419589 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 03:16:23.419601 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 03:16:23.419615 kernel: x2apic enabled
Dec 16 03:16:23.419629 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 03:16:23.419651 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 03:16:23.419666 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 03:16:23.419681 kernel: kvm-guest: setup PV IPIs
Dec 16 03:16:23.419695 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 03:16:23.419713 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 03:16:23.419728 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 16 03:16:23.419743 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 03:16:23.419757 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 03:16:23.419769 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 03:16:23.419781 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 03:16:23.419793 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 03:16:23.419807 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 03:16:23.419818 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 03:16:23.419830 kernel: active return thunk: retbleed_return_thunk
Dec 16 03:16:23.419842 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 03:16:23.419854 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 03:16:23.419866 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 03:16:23.419886 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 03:16:23.419908 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 03:16:23.419920 kernel: active return thunk: srso_return_thunk
Dec 16 03:16:23.419932 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 03:16:23.419943 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 03:16:23.419955 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 03:16:23.419966 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 03:16:23.419982 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 03:16:23.419993 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 03:16:23.420005 kernel: Freeing SMP alternatives memory: 32K
Dec 16 03:16:23.420016 kernel: pid_max: default: 32768 minimum: 301
Dec 16 03:16:23.420027 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 03:16:23.420038 kernel: landlock: Up and running.
Dec 16 03:16:23.420049 kernel: SELinux: Initializing.
Dec 16 03:16:23.420066 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 03:16:23.420081 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 03:16:23.420110 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 03:16:23.420122 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 03:16:23.420134 kernel: ... version:                0
Dec 16 03:16:23.420145 kernel: ... bit width:              48
Dec 16 03:16:23.420157 kernel: ... generic registers:      6
Dec 16 03:16:23.420168 kernel: ... value mask:             0000ffffffffffff
Dec 16 03:16:23.420184 kernel: ... max period:             00007fffffffffff
Dec 16 03:16:23.420195 kernel: ... fixed-purpose events:   0
Dec 16 03:16:23.420213 kernel: ... event mask:             000000000000003f
Dec 16 03:16:23.420227 kernel: signal: max sigframe size: 1776
Dec 16 03:16:23.420239 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 03:16:23.420251 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 03:16:23.420263 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 03:16:23.420279 kernel: smp: Bringing up secondary CPUs ...
Dec 16 03:16:23.420295 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 03:16:23.420312 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 03:16:23.420323 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 03:16:23.420334 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 16 03:16:23.420346 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 120520K reserved, 0K cma-reserved)
Dec 16 03:16:23.420357 kernel: devtmpfs: initialized
Dec 16 03:16:23.420372 kernel: x86/mm: Memory block size: 128MB
Dec 16 03:16:23.420383 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 03:16:23.420394 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 03:16:23.420405 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 03:16:23.420416 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 03:16:23.420427 kernel: audit: initializing netlink subsys (disabled)
Dec 16 03:16:23.420439 kernel: audit: type=2000 audit(1765854979.910:1): state=initialized audit_enabled=0 res=1
Dec 16 03:16:23.420453 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 03:16:23.420464 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 03:16:23.420476 kernel: cpuidle: using governor menu
Dec 16 03:16:23.420486 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 03:16:23.420497 kernel: dca service started, version 1.12.1
Dec 16 03:16:23.420508 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 03:16:23.420519 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 03:16:23.420534 kernel: PCI: Using configuration type 1 for base access
Dec 16 03:16:23.420545 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
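The BogoMIPS figures in this log are internally consistent: per-CPU BogoMIPS is computed from `lpj` (loops per jiffy) as `lpj / (500000 / HZ)`, and with the `lpj=2794748` printed during calibration the 4-CPU total matches the 22357.98 reported above. HZ=1000 is an assumption here (typical for Flatcar kernel builds, not stated in the log); note also that the per-CPU value (~5589.5) is about twice the 2794.748 MHz TSC, as expected on this CPU. A quick sanity check:

```python
# Values copied from the boot log; HZ=1000 is an assumed kernel config value.
lpj = 2794748   # from "Calibrating delay loop (skipped) preset value.. (lpj=2794748)"
HZ = 1000
cpus = 4

bogomips_per_cpu = lpj * HZ / 500000   # kernel formula: lpj / (500000 / HZ)
total = bogomips_per_cpu * cpus

# The kernel truncates (not rounds) the hundredths digit, hence "5589.49" in the log.
print(f"{bogomips_per_cpu} per CPU, {total} total")
```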
Dec 16 03:16:23.420556 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 03:16:23.420567 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 03:16:23.420578 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 03:16:23.420590 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 03:16:23.420608 kernel: ACPI: Added _OSI(Module Device)
Dec 16 03:16:23.420625 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 03:16:23.420637 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 03:16:23.420649 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 03:16:23.420661 kernel: ACPI: Interpreter enabled
Dec 16 03:16:23.420672 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 03:16:23.420684 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 03:16:23.420699 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 03:16:23.420714 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 03:16:23.420732 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 03:16:23.420746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 03:16:23.421072 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 03:16:23.421374 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 03:16:23.421572 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 03:16:23.421591 kernel: PCI host bridge to bus 0000:00
Dec 16 03:16:23.424274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 03:16:23.424481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 03:16:23.424675 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 03:16:23.424869 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 16 03:16:23.425081 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 03:16:23.425305 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 03:16:23.425498 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 03:16:23.425731 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 03:16:23.425972 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 03:16:23.426214 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 16 03:16:23.426417 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 16 03:16:23.426608 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 16 03:16:23.426800 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 03:16:23.427022 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 03:16:23.427242 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 16 03:16:23.427441 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 16 03:16:23.427643 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 16 03:16:23.427847 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 03:16:23.428056 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Dec 16 03:16:23.428277 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 16 03:16:23.428471 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 16 03:16:23.428688 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 03:16:23.428919 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Dec 16 03:16:23.429850 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Dec 16 03:16:23.430116 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 16 03:16:23.430291 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 16 03:16:23.430483 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 03:16:23.430654 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 03:16:23.430845 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 03:16:23.431023 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Dec 16 03:16:23.431214 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Dec 16 03:16:23.431417 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 03:16:23.431643 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 03:16:23.431662 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 03:16:23.431672 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 03:16:23.431681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 03:16:23.431690 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 03:16:23.431700 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 03:16:23.431709 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 03:16:23.431718 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 03:16:23.431732 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 03:16:23.431743 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 03:16:23.431754 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 03:16:23.431765 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 03:16:23.431776 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 03:16:23.431787 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 03:16:23.431799 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 03:16:23.431812 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 03:16:23.431823 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 03:16:23.431834 kernel: iommu: Default domain type: Translated
Dec 16 03:16:23.431845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 03:16:23.431857 kernel: PCI: Using ACPI for IRQ routing
Dec 16 03:16:23.431868 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 03:16:23.431895 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 03:16:23.431908 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 16 03:16:23.432083 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 03:16:23.432267 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 03:16:23.432432 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 03:16:23.432447 kernel: vgaarb: loaded
Dec 16 03:16:23.432460 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 03:16:23.432471 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 03:16:23.432483 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 03:16:23.432492 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 03:16:23.432501 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 03:16:23.432510 kernel: pnp: PnP ACPI init
Dec 16 03:16:23.432695 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 03:16:23.432708 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 03:16:23.432720 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 03:16:23.432729 kernel: NET: Registered PF_INET protocol family
Dec 16 03:16:23.432738 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 03:16:23.432748 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 03:16:23.432757 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 03:16:23.432766 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 03:16:23.432775 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 03:16:23.432854 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 03:16:23.432871 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 03:16:23.432896 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 03:16:23.432910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 03:16:23.432922 kernel: NET: Registered PF_XDP protocol family
Dec 16 03:16:23.433187 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 03:16:23.433384 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 03:16:23.433580 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 03:16:23.433791 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 16 03:16:23.434022 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 03:16:23.434228 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 16 03:16:23.434242 kernel: PCI: CLS 0 bytes, default 64
Dec 16 03:16:23.434253 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 03:16:23.434262 kernel: Initialise system trusted keyrings
Dec 16 03:16:23.434276 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 03:16:23.434285 kernel: Key type asymmetric registered
Dec 16 03:16:23.434293 kernel: Asymmetric key parser 'x509' registered
Dec 16 03:16:23.434302 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 03:16:23.434311 kernel: io scheduler mq-deadline registered
Dec 16 03:16:23.434320 kernel: io scheduler kyber registered
Dec 16 03:16:23.434329 kernel: io scheduler bfq registered
Dec 16 03:16:23.434340 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 03:16:23.434350 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 03:16:23.434359 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 03:16:23.434368 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 03:16:23.434377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 03:16:23.434386 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 03:16:23.434395 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 03:16:23.434406 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 03:16:23.434414 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 03:16:23.434594 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 16 03:16:23.434608 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 03:16:23.434769 kernel: rtc_cmos 00:04: registered as rtc0
Dec 16 03:16:23.434970 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T03:16:21 UTC (1765854981)
Dec 16 03:16:23.435489 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 03:16:23.435571 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 03:16:23.435599 kernel: NET: Registered PF_INET6 protocol family
Dec 16 03:16:23.435684 kernel: Segment Routing with IPv6
Dec 16 03:16:23.435711 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 03:16:23.435738 kernel: NET: Registered PF_PACKET protocol family
Dec 16 03:16:23.435765 kernel: Key type dns_resolver registered
Dec 16 03:16:23.435855 kernel: IPI shorthand broadcast: enabled
Dec 16 03:16:23.435868 kernel: sched_clock: Marking stable (1732003481, 282305426)->(2232617915, -218309008)
Dec 16 03:16:23.435886 kernel: registered taskstats version 1
Dec 16 03:16:23.435896 kernel: Loading compiled-in X.509 certificates
Dec 16 03:16:23.435905 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8'
Dec 16 03:16:23.435914 kernel: Demotion targets for Node 0: null
Dec 16 03:16:23.435923 kernel: Key type .fscrypt registered
Dec 16 03:16:23.435933 kernel: Key type fscrypt-provisioning registered
Dec 16 03:16:23.435944 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 03:16:23.435953 kernel: ima: Allocated hash algorithm: sha1
Dec 16 03:16:23.435962 kernel: ima: No architecture policies found
Dec 16 03:16:23.435971 kernel: clk: Disabling unused clocks
Dec 16 03:16:23.435980 kernel: Freeing unused kernel image (initmem) memory: 15556K
Dec 16 03:16:23.435989 kernel: Write protecting the kernel read-only data: 47104k
Dec 16 03:16:23.435998 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 16 03:16:23.436009 kernel: Run /init as init process
Dec 16 03:16:23.436018 kernel: with arguments:
Dec 16 03:16:23.436027 kernel: /init
Dec 16 03:16:23.436036 kernel: with environment:
Dec 16 03:16:23.436045 kernel: HOME=/
Dec 16 03:16:23.436053 kernel: TERM=linux
Dec 16 03:16:23.436062 kernel: SCSI subsystem initialized
Dec 16 03:16:23.436073 kernel: libata version 3.00 loaded.
Dec 16 03:16:23.436304 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 03:16:23.436344 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 03:16:23.436559 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 03:16:23.436806 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 03:16:23.437149 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 03:16:23.437533 kernel: scsi host0: ahci
Dec 16 03:16:23.437935 kernel: scsi host1: ahci
Dec 16 03:16:23.438304 kernel: scsi host2: ahci
Dec 16 03:16:23.438539 kernel: scsi host3: ahci
Dec 16 03:16:23.438819 kernel: scsi host4: ahci
Dec 16 03:16:23.439073 kernel: scsi host5: ahci
Dec 16 03:16:23.439111 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Dec 16 03:16:23.439125 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Dec 16 03:16:23.439136 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Dec 16 03:16:23.439148 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Dec 16 03:16:23.439161 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Dec 16 03:16:23.439173 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Dec 16 03:16:23.439188 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 03:16:23.439200 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 16 03:16:23.439212 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 03:16:23.439224 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 03:16:23.439237 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 03:16:23.439249 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 16 03:16:23.439261 kernel: ata3.00: LPM support broken, forcing max_power
Dec 16 03:16:23.439275 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 16 03:16:23.439287 kernel: ata3.00: applying bridge limits
Dec 16 03:16:23.439300 kernel: ata3.00: LPM support broken, forcing max_power
Dec 16 03:16:23.439311 kernel: ata3.00: configured for UDMA/100
Dec 16 03:16:23.439562 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 16 03:16:23.439814 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 16 03:16:23.440067 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 16 03:16:23.440085 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 03:16:23.440171 kernel: GPT:16515071 != 27000831
Dec 16 03:16:23.440183 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 03:16:23.440195 kernel: GPT:16515071 != 27000831
Dec 16 03:16:23.440207 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 03:16:23.440219 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 03:16:23.440451 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 16 03:16:23.440469 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 03:16:23.440696 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 16 03:16:23.440716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 03:16:23.440729 kernel: device-mapper: uevent: version 1.0.3
Dec 16 03:16:23.440741 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 03:16:23.440754 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 03:16:23.440772 kernel: raid6: avx2x4 gen() 20034 MB/s
Dec 16 03:16:23.440784 kernel: raid6: avx2x2 gen() 19911 MB/s
Dec 16 03:16:23.440797 kernel: raid6: avx2x1 gen() 20627 MB/s
Dec 16 03:16:23.440812 kernel: raid6: using algorithm avx2x1 gen() 20627 MB/s
Dec 16 03:16:23.440826 kernel: raid6: .... xor() 14594 MB/s, rmw enabled
Dec 16 03:16:23.440838 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 03:16:23.440850 kernel: xor: automatically using best checksumming function   avx
Dec 16 03:16:23.440863 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 03:16:23.440875 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (180)
Dec 16 03:16:23.440902 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7
Dec 16 03:16:23.440915 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:16:23.440930 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 03:16:23.440942 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 03:16:23.440954 kernel: loop: module loaded
Dec 16 03:16:23.440966 kernel: loop0: detected capacity change from 0 to 100528
Dec 16 03:16:23.440979 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 03:16:23.440993 systemd[1]: Successfully made /usr/ read-only.
Dec 16 03:16:23.441010 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 03:16:23.441028 systemd[1]: Detected virtualization kvm.
Dec 16 03:16:23.441040 systemd[1]: Detected architecture x86-64.
Dec 16 03:16:23.441053 systemd[1]: Running in initrd.
Dec 16 03:16:23.441066 systemd[1]: No hostname configured, using default hostname.
Dec 16 03:16:23.441079 systemd[1]: Hostname set to .
Dec 16 03:16:23.441109 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 03:16:23.441126 systemd[1]: Queued start job for default target initrd.target.
Dec 16 03:16:23.441139 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:16:23.441152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:16:23.441165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:16:23.441179 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:16:23.441193 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:16:23.441210 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:16:23.441223 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:16:23.441236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:16:23.441249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:16:23.441265 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:16:23.441278 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:16:23.441294 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:16:23.441307 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:16:23.441320 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:16:23.441333 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:16:23.441346 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:16:23.441359 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:16:23.441373 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:16:23.441389 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Dec 16 03:16:23.441403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:16:23.441415 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:16:23.441429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:16:23.441442 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:16:23.441456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:16:23.441472 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:16:23.441485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:16:23.441499 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:16:23.441513 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:16:23.441527 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:16:23.441540 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:16:23.441553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:16:23.441570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:16:23.441584 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:16:23.441597 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:16:23.441610 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:16:23.441627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:16:23.441683 systemd-journald[316]: Collecting audit messages is enabled. 
Dec 16 03:16:23.441721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:16:23.441742 systemd-journald[316]: Journal started Dec 16 03:16:23.441774 systemd-journald[316]: Runtime Journal (/run/log/journal/51ee99e499f14e01a7ef9182a710c41c) is 6M, max 48.2M, 42.1M free. Dec 16 03:16:23.442108 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:16:23.442862 systemd-modules-load[319]: Inserted module 'br_netfilter' Dec 16 03:16:23.535521 kernel: Bridge firewalling registered Dec 16 03:16:23.535587 kernel: audit: type=1130 audit(1765854983.524:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.535618 kernel: audit: type=1130 audit(1765854983.529:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.525230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:16:23.545395 kernel: audit: type=1130 audit(1765854983.536:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:23.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.535622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:16:23.554143 kernel: audit: type=1130 audit(1765854983.546:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.545429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:16:23.557333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:16:23.558737 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:16:23.582046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:16:23.583785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:16:23.601551 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:16:23.608823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:16:23.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:23.614712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:16:23.626201 kernel: audit: type=1130 audit(1765854983.614:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.626254 kernel: audit: type=1130 audit(1765854983.618:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.626282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:16:23.634050 kernel: audit: type=1130 audit(1765854983.626:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.634126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:16:23.642482 kernel: audit: type=1130 audit(1765854983.634:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:23.644193 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:16:23.649049 kernel: audit: type=1334 audit(1765854983.645:10): prog-id=6 op=LOAD Dec 16 03:16:23.645000 audit: BPF prog-id=6 op=LOAD Dec 16 03:16:23.648238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:16:23.693308 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:16:23.732686 systemd-resolved[355]: Positive Trust Anchors: Dec 16 03:16:23.732706 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:16:23.732712 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:16:23.732754 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:16:23.757779 systemd-resolved[355]: Defaulting to hostname 'linux'. Dec 16 03:16:23.760966 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:16:23.763304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 16 03:16:23.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.854140 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:16:23.874139 kernel: iscsi: registered transport (tcp) Dec 16 03:16:23.914641 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:16:23.914744 kernel: QLogic iSCSI HBA Driver Dec 16 03:16:23.949117 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:16:23.981408 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:16:23.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:23.987764 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:16:24.049342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:16:24.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.054735 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:16:24.059369 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:16:24.102349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:16:24.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:24.174000 audit: BPF prog-id=7 op=LOAD Dec 16 03:16:24.174000 audit: BPF prog-id=8 op=LOAD Dec 16 03:16:24.175072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:16:24.196266 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:16:24.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.220939 systemd-udevd[672]: Using default interface naming scheme 'v257'. Dec 16 03:16:24.238476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:16:24.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.245305 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:16:24.250000 audit: BPF prog-id=9 op=LOAD Dec 16 03:16:24.260140 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:16:24.280279 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation Dec 16 03:16:24.320547 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:16:24.328660 systemd-networkd[697]: lo: Link UP Dec 16 03:16:24.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.328664 systemd-networkd[697]: lo: Gained carrier Dec 16 03:16:24.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:24.331876 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:16:24.333122 systemd[1]: Reached target network.target - Network. Dec 16 03:16:24.340772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:16:24.453408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:16:24.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.459497 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:16:24.534462 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 03:16:24.560487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 03:16:24.595118 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:16:24.597249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:16:24.606858 systemd-networkd[697]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:16:24.631056 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 03:16:24.606875 systemd-networkd[697]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:16:24.608595 systemd-networkd[697]: eth0: Link UP Dec 16 03:16:24.609356 systemd-networkd[697]: eth0: Gained carrier Dec 16 03:16:24.609369 systemd-networkd[697]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:16:24.619601 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Dec 16 03:16:24.647953 kernel: AES CTR mode by8 optimization enabled Dec 16 03:16:24.645599 systemd-networkd[697]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 03:16:24.652335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:16:24.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.668876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:16:24.669025 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:16:24.671132 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:16:24.683262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:16:24.780356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:16:24.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.891373 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:16:24.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:24.934512 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:16:24.943314 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 16 03:16:24.943340 kernel: audit: type=1130 audit(1765854984.933:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:24.943332 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:16:24.947220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:16:24.952513 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:16:24.958076 disk-uuid[832]: Primary Header is updated. Dec 16 03:16:24.958076 disk-uuid[832]: Secondary Entries is updated. Dec 16 03:16:24.958076 disk-uuid[832]: Secondary Header is updated. Dec 16 03:16:24.993038 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:16:25.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:25.016135 kernel: audit: type=1130 audit(1765854985.011:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:25.681923 systemd-networkd[697]: eth0: Gained IPv6LL Dec 16 03:16:26.025525 disk-uuid[841]: Warning: The kernel is still using the old partition table. Dec 16 03:16:26.025525 disk-uuid[841]: The new table will be used at the next reboot or after you Dec 16 03:16:26.025525 disk-uuid[841]: run partprobe(8) or kpartx(8) Dec 16 03:16:26.025525 disk-uuid[841]: The operation has completed successfully. Dec 16 03:16:26.038665 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:16:26.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.038875 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Dec 16 03:16:26.051651 kernel: audit: type=1130 audit(1765854986.042:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.051708 kernel: audit: type=1131 audit(1765854986.042:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.043986 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:16:26.361162 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858) Dec 16 03:16:26.365736 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:16:26.365796 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:16:26.370387 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:16:26.370422 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:16:26.382148 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:16:26.384284 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:16:26.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.387032 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 03:16:26.395861 kernel: audit: type=1130 audit(1765854986.384:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.650511 ignition[877]: Ignition 2.24.0 Dec 16 03:16:26.651997 ignition[877]: Stage: fetch-offline Dec 16 03:16:26.652124 ignition[877]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:16:26.652145 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 03:16:26.652281 ignition[877]: parsed url from cmdline: "" Dec 16 03:16:26.652290 ignition[877]: no config URL provided Dec 16 03:16:26.652385 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:16:26.652400 ignition[877]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:16:26.652466 ignition[877]: op(1): [started] loading QEMU firmware config module Dec 16 03:16:26.652478 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 03:16:26.667522 ignition[877]: op(1): [finished] loading QEMU firmware config module Dec 16 03:16:26.749130 ignition[877]: parsing config with SHA512: 00c8d11b8c6269b9dfbec0cde1ad2f725f99284985c37bed73bba3ec4fb9e79304c5503050dae253e426d4f4c42711c79ad644392eedbc8358ccd746e3d40c1a Dec 16 03:16:26.753802 unknown[877]: fetched base config from "system" Dec 16 03:16:26.753814 unknown[877]: fetched user config from "qemu" Dec 16 03:16:26.756591 ignition[877]: fetch-offline: fetch-offline passed Dec 16 03:16:26.757989 ignition[877]: Ignition finished successfully Dec 16 03:16:26.762139 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:16:26.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 16 03:16:26.763013 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 03:16:26.773715 kernel: audit: type=1130 audit(1765854986.762:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:26.763992 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 03:16:26.794511 ignition[887]: Ignition 2.24.0
Dec 16 03:16:26.794524 ignition[887]: Stage: kargs
Dec 16 03:16:26.794677 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:16:26.794690 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:16:26.795472 ignition[887]: kargs: kargs passed
Dec 16 03:16:26.795514 ignition[887]: Ignition finished successfully
Dec 16 03:16:26.801691 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 03:16:26.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:26.803929 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 03:16:26.811288 kernel: audit: type=1130 audit(1765854986.802:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:26.840146 ignition[894]: Ignition 2.24.0
Dec 16 03:16:26.840162 ignition[894]: Stage: disks
Dec 16 03:16:26.840348 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:16:26.840360 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:16:26.841428 ignition[894]: disks: disks passed
Dec 16 03:16:26.841490 ignition[894]: Ignition finished successfully
Dec 16 03:16:26.848975 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 03:16:26.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:26.852505 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 03:16:26.880910 kernel: audit: type=1130 audit(1765854986.852:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:26.880672 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 03:16:26.881430 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 03:16:26.887655 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 03:16:26.890664 systemd[1]: Reached target basic.target - Basic System.
Dec 16 03:16:26.899586 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 03:16:26.945850 systemd-fsck[903]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 16 03:16:27.547340 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 03:16:27.559173 kernel: audit: type=1130 audit(1765854987.549:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:27.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:27.550852 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 03:16:27.711164 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none.
Dec 16 03:16:27.712238 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 03:16:27.714558 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 03:16:27.718878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 03:16:27.722027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 03:16:27.724603 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 03:16:27.724639 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 03:16:27.724662 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:16:27.740302 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 03:16:27.760852 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 03:16:27.769709 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (911)
Dec 16 03:16:27.769747 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:16:27.769777 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:16:27.774981 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 03:16:27.775047 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 03:16:27.776667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 03:16:27.963407 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 03:16:27.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:27.969564 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 03:16:27.971863 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 03:16:27.979448 kernel: audit: type=1130 audit(1765854987.963:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:27.996466 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 03:16:28.022818 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:16:28.041368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 03:16:28.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:28.058965 ignition[1010]: INFO : Ignition 2.24.0
Dec 16 03:16:28.058965 ignition[1010]: INFO : Stage: mount
Dec 16 03:16:28.058965 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:16:28.058965 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:16:28.058965 ignition[1010]: INFO : mount: mount passed
Dec 16 03:16:28.058965 ignition[1010]: INFO : Ignition finished successfully
Dec 16 03:16:28.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:28.060874 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 03:16:28.065058 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 03:16:28.714028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 03:16:28.742137 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1021)
Dec 16 03:16:28.745458 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:16:28.745488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:16:28.749540 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 03:16:28.749587 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 03:16:28.751356 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 03:16:28.795008 ignition[1038]: INFO : Ignition 2.24.0
Dec 16 03:16:28.795008 ignition[1038]: INFO : Stage: files
Dec 16 03:16:28.801534 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:16:28.801534 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:16:28.801534 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 03:16:28.808247 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 03:16:28.808247 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 03:16:28.808247 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 03:16:28.808247 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 03:16:28.808247 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 03:16:28.807557 unknown[1038]: wrote ssh authorized keys file for user: core
Dec 16 03:16:28.821557 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 03:16:28.821557 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 03:16:28.852377 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 03:16:28.898526 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 03:16:28.898526 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 03:16:28.905549 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 03:16:28.926519 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 03:16:28.926519 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 03:16:28.926519 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 03:16:28.926519 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 03:16:28.926519 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 03:16:28.926519 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 03:16:29.216809 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 03:16:29.586664 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 03:16:29.586664 ignition[1038]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 03:16:29.593173 ignition[1038]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 03:16:29.618693 ignition[1038]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 03:16:29.622404 ignition[1038]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 03:16:29.625162 ignition[1038]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 03:16:29.625162 ignition[1038]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 03:16:29.629849 ignition[1038]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 03:16:29.629849 ignition[1038]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 03:16:29.629849 ignition[1038]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 03:16:29.629849 ignition[1038]: INFO : files: files passed
Dec 16 03:16:29.629849 ignition[1038]: INFO : Ignition finished successfully
Dec 16 03:16:29.634022 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 03:16:29.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.645381 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 03:16:29.648198 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 03:16:29.670656 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 03:16:29.670807 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 03:16:29.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.679611 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 03:16:29.685335 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:16:29.685335 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:16:29.691237 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:16:29.695658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:16:29.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.696871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 03:16:29.701484 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 03:16:29.794647 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 03:16:29.794832 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 03:16:29.796181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 03:16:29.847717 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 03:16:29.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.849976 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 03:16:29.851744 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 03:16:29.939737 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:16:29.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.943287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 03:16:29.952349 kernel: kauditd_printk_skb: 8 callbacks suppressed
Dec 16 03:16:29.952374 kernel: audit: type=1130 audit(1765854989.940:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:29.977603 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 03:16:29.977861 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:16:29.979086 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 03:16:29.985331 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 03:16:29.986174 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 03:16:29.986321 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:16:30.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.015873 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 03:16:30.021215 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 03:16:30.028854 kernel: audit: type=1131 audit(1765854990.014:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.028029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 03:16:30.029593 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:16:30.034772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 03:16:30.041269 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 03:16:30.042558 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 03:16:30.080163 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 03:16:30.085507 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 03:16:30.086630 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 03:16:30.087582 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 03:16:30.105089 kernel: audit: type=1131 audit(1765854990.095:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.094595 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 03:16:30.094766 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 03:16:30.105259 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:16:30.109281 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:16:30.113618 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 03:16:30.113768 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:16:30.123589 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 03:16:30.123822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 03:16:30.132506 kernel: audit: type=1131 audit(1765854990.127:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.132784 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 03:16:30.133022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 03:16:30.139317 kernel: audit: type=1131 audit(1765854990.133:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.134048 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 03:16:30.139767 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 03:16:30.140020 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:16:30.146573 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 03:16:30.147127 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 03:16:30.157349 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 03:16:30.157520 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 03:16:30.160682 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 03:16:30.160820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 03:16:30.164248 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 03:16:30.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.164371 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:16:30.178738 kernel: audit: type=1131 audit(1765854990.167:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.178777 kernel: audit: type=1131 audit(1765854990.173:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.166414 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 03:16:30.166631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:16:30.167831 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 03:16:30.167976 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 03:16:30.174365 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 03:16:30.180221 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 03:16:30.193962 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 03:16:30.194244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:16:30.203808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 03:16:30.218167 kernel: audit: type=1131 audit(1765854990.203:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.218224 kernel: audit: type=1131 audit(1765854990.204:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.204007 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:16:30.205128 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 03:16:30.227999 kernel: audit: type=1131 audit(1765854990.218:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.205260 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 03:16:30.921134 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1061357082 wd_nsec: 1061356683
Dec 16 03:16:30.921671 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 03:16:30.921887 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 03:16:30.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.943366 ignition[1095]: INFO : Ignition 2.24.0
Dec 16 03:16:30.943366 ignition[1095]: INFO : Stage: umount
Dec 16 03:16:30.946439 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:16:30.946439 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:16:30.946439 ignition[1095]: INFO : umount: umount passed
Dec 16 03:16:30.946439 ignition[1095]: INFO : Ignition finished successfully
Dec 16 03:16:30.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.948526 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 03:16:30.948714 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 03:16:30.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.949755 systemd[1]: Stopped target network.target - Network.
Dec 16 03:16:30.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.953930 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 03:16:30.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.954066 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 03:16:30.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.958745 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 03:16:30.958823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 03:16:30.961800 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 03:16:30.961867 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 03:16:30.964631 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 03:16:30.964699 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 03:16:30.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.967909 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 03:16:30.968766 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 03:16:30.987000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 03:16:30.977870 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 03:16:30.978077 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 03:16:30.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:30.988615 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 03:16:30.988810 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 03:16:30.997444 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 03:16:30.999988 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 03:16:31.001000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 03:16:31.000042 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 03:16:31.001736 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 03:16:31.006444 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 03:16:31.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.006548 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 03:16:31.007814 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 03:16:31.007867 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:16:31.008569 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 03:16:31.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.008614 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 03:16:31.015638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:16:31.019003 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 03:16:31.056701 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 03:16:31.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.056965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:16:31.060304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 03:16:31.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:31.060374 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 03:16:31.061937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 03:16:31.061976 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 03:16:31.062557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 03:16:31.062616 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 03:16:31.063577 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 03:16:31.063623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 03:16:31.064582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 03:16:31.064631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 03:16:31.067057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 03:16:31.067531 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 03:16:31.067598 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 03:16:31.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:31.067934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:16:31.067982 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:16:31.068543 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:16:31.068591 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:16:31.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:31.068883 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:16:31.068929 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:16:31.069804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:16:31.069849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:16:31.080429 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:16:31.080578 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:16:31.089178 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:16:31.089356 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:16:31.109793 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:16:31.109970 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:16:31.117369 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:16:31.118925 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 16 03:16:31.119044 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:16:31.122152 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:16:31.146354 systemd[1]: Switching root. Dec 16 03:16:31.189022 systemd-journald[316]: Journal stopped Dec 16 03:16:32.924168 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Dec 16 03:16:32.924247 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:16:32.924273 kernel: SELinux: policy capability open_perms=1 Dec 16 03:16:32.924289 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:16:32.924305 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:16:32.924320 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:16:32.924340 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:16:32.924360 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:16:32.924375 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:16:32.924769 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:16:32.924788 systemd[1]: Successfully loaded SELinux policy in 70.639ms. Dec 16 03:16:32.924814 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.423ms. Dec 16 03:16:32.924831 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:16:32.924848 systemd[1]: Detected virtualization kvm. Dec 16 03:16:32.924865 systemd[1]: Detected architecture x86-64. Dec 16 03:16:32.924881 systemd[1]: Detected first boot. Dec 16 03:16:32.924908 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Dec 16 03:16:32.924930 zram_generator::config[1139]: No configuration found. Dec 16 03:16:32.924953 kernel: Guest personality initialized and is inactive Dec 16 03:16:32.924968 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:16:32.924986 kernel: Initialized host personality Dec 16 03:16:32.925002 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:16:32.925017 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:16:32.925043 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:16:32.925061 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:16:32.925077 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:16:32.925124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:16:32.925142 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:16:32.925159 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:16:32.925176 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:16:32.925217 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:16:32.925235 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:16:32.925252 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:16:32.925267 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:16:32.925285 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:16:32.925301 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:16:32.925318 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 16 03:16:32.925344 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:16:32.925361 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:16:32.925378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:16:32.925394 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:16:32.925411 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:16:32.925427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:16:32.925454 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:16:32.925471 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:16:32.925488 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:16:32.925504 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:16:32.925521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:16:32.925542 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:16:32.925559 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:16:32.925585 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:16:32.925604 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:16:32.925621 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:16:32.925651 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:16:32.925669 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:16:32.925685 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Dec 16 03:16:32.925702 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:16:32.925731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:16:32.925749 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:16:32.925765 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:16:32.925781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:16:32.925798 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:16:32.925816 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:16:32.925832 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:16:32.925859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:16:32.925877 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:16:32.925908 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:16:32.925928 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:16:32.925945 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:16:32.925961 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:16:32.925977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:16:32.926006 systemd[1]: Reached target machines.target - Containers. Dec 16 03:16:32.926026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:16:32.926043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 03:16:32.926059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:16:32.926081 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:16:32.926131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:16:32.926160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:16:32.926202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:16:32.926231 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:16:32.926260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:16:32.926290 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:16:32.926320 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:16:32.926347 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:16:32.926369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:16:32.926413 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:16:32.926446 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:16:32.926477 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:16:32.926520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:16:32.926550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:16:32.926578 kernel: fuse: init (API version 7.41) Dec 16 03:16:32.926646 systemd-journald[1202]: Collecting audit messages is enabled. 
Dec 16 03:16:32.926695 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:16:32.926725 systemd-journald[1202]: Journal started Dec 16 03:16:32.926779 systemd-journald[1202]: Runtime Journal (/run/log/journal/51ee99e499f14e01a7ef9182a710c41c) is 6M, max 48.2M, 42.1M free. Dec 16 03:16:32.674000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 03:16:32.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.850000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:16:32.850000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:16:32.851000 audit: BPF prog-id=15 op=LOAD Dec 16 03:16:32.851000 audit: BPF prog-id=16 op=LOAD Dec 16 03:16:32.851000 audit: BPF prog-id=17 op=LOAD Dec 16 03:16:32.922000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:16:32.922000 audit[1202]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff46a591d0 a2=4000 a3=0 items=0 ppid=1 pid=1202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:32.922000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:16:32.520747 systemd[1]: Queued start job for default target multi-user.target. 
Dec 16 03:16:32.534426 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 03:16:32.534994 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:16:32.535460 systemd[1]: systemd-journald.service: Consumed 1.017s CPU time. Dec 16 03:16:32.933410 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:16:32.942116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:16:32.948678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:16:32.956614 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:16:32.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.960972 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:16:32.963066 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:16:32.965189 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:16:32.967356 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:16:32.970969 kernel: ACPI: bus type drm_connector registered Dec 16 03:16:32.971781 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:16:32.974048 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:16:32.978368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:16:32.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:32.980719 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:16:32.980937 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:16:32.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.983162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:16:32.983372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:16:32.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.985485 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:16:32.985707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:16:32.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:32.987701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:16:32.987929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:16:32.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.990483 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:16:32.990771 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:16:32.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.993010 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:16:32.993347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:16:32.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:32.995808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:16:32.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:32.998676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:16:33.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.002207 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:16:33.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.004791 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:16:33.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.021993 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:16:33.026381 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:16:33.030183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:16:33.033407 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 16 03:16:33.035485 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:16:33.035518 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:16:33.038296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:16:33.073430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:16:33.073637 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:16:33.076429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:16:33.080434 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:16:33.082772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:16:33.089280 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:16:33.091508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:16:33.094333 systemd-journald[1202]: Time spent on flushing to /var/log/journal/51ee99e499f14e01a7ef9182a710c41c is 31.461ms for 1107 entries. Dec 16 03:16:33.094333 systemd-journald[1202]: System Journal (/var/log/journal/51ee99e499f14e01a7ef9182a710c41c) is 8M, max 163.5M, 155.5M free. Dec 16 03:16:33.284187 systemd-journald[1202]: Received client request to flush runtime journal. 
Dec 16 03:16:33.284277 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:16:33.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.095360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:16:33.100415 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:16:33.106376 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:16:33.135446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:16:33.138427 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:16:33.140438 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:16:33.230580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:16:33.232824 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:16:33.237253 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:16:33.249909 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 16 03:16:33.249922 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 16 03:16:33.320507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Dec 16 03:16:33.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.325237 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:16:33.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.328996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:16:33.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.331802 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:16:33.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.341049 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:16:33.342118 kernel: loop2: detected capacity change from 0 to 229808 Dec 16 03:16:33.392139 kernel: loop3: detected capacity change from 0 to 50784 Dec 16 03:16:33.404362 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:16:33.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:33.439124 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 03:16:33.455343 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:16:33.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.461000 audit: BPF prog-id=18 op=LOAD Dec 16 03:16:33.461000 audit: BPF prog-id=19 op=LOAD Dec 16 03:16:33.461000 audit: BPF prog-id=20 op=LOAD Dec 16 03:16:33.464484 kernel: loop5: detected capacity change from 0 to 229808 Dec 16 03:16:33.462355 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:16:33.466000 audit: BPF prog-id=21 op=LOAD Dec 16 03:16:33.467814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:16:33.472297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:16:33.478000 audit: BPF prog-id=22 op=LOAD Dec 16 03:16:33.478000 audit: BPF prog-id=23 op=LOAD Dec 16 03:16:33.478000 audit: BPF prog-id=24 op=LOAD Dec 16 03:16:33.481532 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:16:33.485000 audit: BPF prog-id=25 op=LOAD Dec 16 03:16:33.485000 audit: BPF prog-id=26 op=LOAD Dec 16 03:16:33.485000 audit: BPF prog-id=27 op=LOAD Dec 16 03:16:33.487130 kernel: loop6: detected capacity change from 0 to 50784 Dec 16 03:16:33.487113 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:16:33.494563 (sd-merge)[1281]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 03:16:33.501204 (sd-merge)[1281]: Merged extensions into '/usr'. Dec 16 03:16:33.537101 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 16 03:16:33.569298 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:16:33.575032 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:16:33.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:33.577477 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:16:33.577488 systemd[1]: Reloading... Dec 16 03:16:33.584076 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Dec 16 03:16:33.584150 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Dec 16 03:16:33.742339 zram_generator::config[1343]: No configuration found. Dec 16 03:16:33.783318 systemd-oomd[1283]: No swap; memory pressure usage will be degraded Dec 16 03:16:33.789750 systemd-resolved[1284]: Positive Trust Anchors: Dec 16 03:16:33.789769 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:16:33.789775 systemd-resolved[1284]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:16:33.789807 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:16:33.794517 systemd-resolved[1284]: Defaulting to hostname 'linux'. 
Dec 16 03:16:33.953930 systemd[1]: Reloading finished in 376 ms.
Dec 16 03:16:33.986939 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 03:16:33.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:33.989188 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 03:16:33.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:33.991400 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 03:16:33.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:33.993521 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 03:16:33.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:33.996171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 03:16:33.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.003712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:16:34.100943 systemd[1]: Starting ensure-sysext.service...
Dec 16 03:16:34.103537 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 03:16:34.107000 audit: BPF prog-id=28 op=LOAD
Dec 16 03:16:34.107000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 03:16:34.107000 audit: BPF prog-id=29 op=LOAD
Dec 16 03:16:34.107000 audit: BPF prog-id=30 op=LOAD
Dec 16 03:16:34.107000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 03:16:34.107000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 03:16:34.109000 audit: BPF prog-id=31 op=LOAD
Dec 16 03:16:34.109000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 03:16:34.112000 audit: BPF prog-id=32 op=LOAD
Dec 16 03:16:34.112000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 03:16:34.112000 audit: BPF prog-id=33 op=LOAD
Dec 16 03:16:34.112000 audit: BPF prog-id=34 op=LOAD
Dec 16 03:16:34.112000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 03:16:34.112000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 03:16:34.113000 audit: BPF prog-id=35 op=LOAD
Dec 16 03:16:34.113000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 03:16:34.113000 audit: BPF prog-id=36 op=LOAD
Dec 16 03:16:34.113000 audit: BPF prog-id=37 op=LOAD
Dec 16 03:16:34.113000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 03:16:34.113000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 03:16:34.114000 audit: BPF prog-id=38 op=LOAD
Dec 16 03:16:34.114000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 03:16:34.114000 audit: BPF prog-id=39 op=LOAD
Dec 16 03:16:34.114000 audit: BPF prog-id=40 op=LOAD
Dec 16 03:16:34.114000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 03:16:34.114000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 03:16:34.120863 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)...
Dec 16 03:16:34.120881 systemd[1]: Reloading...
Dec 16 03:16:34.124299 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 03:16:34.124679 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 03:16:34.124948 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 03:16:34.126248 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Dec 16 03:16:34.126324 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Dec 16 03:16:34.133675 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 03:16:34.133692 systemd-tmpfiles[1369]: Skipping /boot
Dec 16 03:16:34.147046 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 03:16:34.147062 systemd-tmpfiles[1369]: Skipping /boot
Dec 16 03:16:34.198921 zram_generator::config[1401]: No configuration found.
Dec 16 03:16:34.419697 systemd[1]: Reloading finished in 298 ms.
Dec 16 03:16:34.446241 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 03:16:34.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.450000 audit: BPF prog-id=41 op=LOAD
Dec 16 03:16:34.450000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 03:16:34.450000 audit: BPF prog-id=42 op=LOAD
Dec 16 03:16:34.450000 audit: BPF prog-id=43 op=LOAD
Dec 16 03:16:34.450000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 03:16:34.450000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 03:16:34.451000 audit: BPF prog-id=44 op=LOAD
Dec 16 03:16:34.451000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 03:16:34.452000 audit: BPF prog-id=45 op=LOAD
Dec 16 03:16:34.453000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 03:16:34.453000 audit: BPF prog-id=46 op=LOAD
Dec 16 03:16:34.453000 audit: BPF prog-id=47 op=LOAD
Dec 16 03:16:34.453000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 03:16:34.453000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 03:16:34.454000 audit: BPF prog-id=48 op=LOAD
Dec 16 03:16:34.454000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 03:16:34.454000 audit: BPF prog-id=49 op=LOAD
Dec 16 03:16:34.454000 audit: BPF prog-id=50 op=LOAD
Dec 16 03:16:34.454000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 03:16:34.454000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 03:16:34.461000 audit: BPF prog-id=51 op=LOAD
Dec 16 03:16:34.461000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 03:16:34.461000 audit: BPF prog-id=52 op=LOAD
Dec 16 03:16:34.461000 audit: BPF prog-id=53 op=LOAD
Dec 16 03:16:34.461000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 03:16:34.461000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 03:16:34.466119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:16:34.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.477505 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 03:16:34.481419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 03:16:34.484991 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 03:16:34.491994 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 03:16:34.494000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 03:16:34.494000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 03:16:34.495000 audit: BPF prog-id=54 op=LOAD
Dec 16 03:16:34.495000 audit: BPF prog-id=55 op=LOAD
Dec 16 03:16:34.498325 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:16:34.502990 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 03:16:34.509266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:16:34.511154 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 03:16:34.516472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 03:16:34.518000 audit[1451]: SYSTEM_BOOT pid=1451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.521353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 03:16:34.523318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:16:34.523511 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:16:34.523624 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:16:34.524981 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 03:16:34.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.540031 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:16:34.540265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:16:34.551545 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 03:16:34.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.557016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 03:16:34.557442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 03:16:34.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.560767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 03:16:34.561763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 03:16:34.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:34.571981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:16:34.573075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:16:34.576394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 03:16:34.579765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 03:16:34.583488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:16:34.583750 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:16:34.583862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:16:34.583963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:16:34.584000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 03:16:34.584000 audit[1476]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdef3a3610 a2=420 a3=0 items=0 ppid=1440 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.584000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:16:34.585005 augenrules[1476]: No rules
Dec 16 03:16:34.585234 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 03:16:34.585510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 03:16:34.588243 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 03:16:34.592682 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 03:16:34.595840 systemd-udevd[1445]: Using default interface naming scheme 'v257'.
Dec 16 03:16:34.596763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 03:16:34.597518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 03:16:34.601271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 03:16:34.601646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 03:16:34.612123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:16:34.617553 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 03:16:34.619318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:16:34.622302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 03:16:34.624349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 03:16:34.630323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 03:16:34.633592 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 03:16:34.635522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:16:34.635731 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:16:34.635837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:16:34.635990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:16:34.637521 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 03:16:34.640254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 03:16:34.640693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 03:16:34.649208 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 03:16:34.650234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 03:16:34.653149 systemd[1]: Finished ensure-sysext.service.
Dec 16 03:16:34.655327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 03:16:34.655664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 03:16:34.658341 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:16:34.662811 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 03:16:34.663732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 03:16:34.676382 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 03:16:34.678221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 03:16:34.678319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 03:16:34.680409 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 03:16:34.682421 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 03:16:34.694608 augenrules[1485]: /sbin/augenrules: No change
Dec 16 03:16:34.754000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 03:16:34.754000 audit[1535]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd29d3a1e0 a2=420 a3=0 items=0 ppid=1485 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.754000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:16:34.754000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 03:16:34.754000 audit[1535]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd29d3c670 a2=420 a3=0 items=0 ppid=1485 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.754000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:16:34.755020 augenrules[1535]: No rules
Dec 16 03:16:34.764968 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 03:16:34.765294 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 03:16:34.775363 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 03:16:34.838017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 03:16:34.848667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 03:16:35.036681 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 03:16:35.035335 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 03:16:35.081128 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 16 03:16:35.100881 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 03:16:35.104512 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 03:16:35.106866 systemd-networkd[1519]: lo: Link UP
Dec 16 03:16:35.106877 systemd-networkd[1519]: lo: Gained carrier
Dec 16 03:16:35.111111 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 03:16:35.114950 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 03:16:35.114965 systemd-networkd[1519]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 03:16:35.115680 systemd-networkd[1519]: eth0: Link UP
Dec 16 03:16:35.116132 systemd-networkd[1519]: eth0: Gained carrier
Dec 16 03:16:35.116150 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 03:16:35.171539 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 03:16:35.171948 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 03:16:35.168158 systemd[1]: Reached target network.target - Network.
Dec 16 03:16:35.174689 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 03:16:35.179327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 03:16:35.182128 kernel: ACPI: button: Power Button [PWRF]
Dec 16 03:16:35.184245 systemd-networkd[1519]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 03:16:35.185824 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection.
Dec 16 03:16:35.678452 systemd-resolved[1284]: Clock change detected. Flushing caches.
Dec 16 03:16:35.678538 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 16 03:16:35.678579 systemd-timesyncd[1523]: Initial clock synchronization to Tue 2025-12-16 03:16:35.678400 UTC.
Dec 16 03:16:35.710436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 03:16:35.721195 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 03:16:35.936302 kernel: kvm_amd: TSC scaling supported
Dec 16 03:16:35.936440 kernel: kvm_amd: Nested Virtualization enabled
Dec 16 03:16:35.936465 kernel: kvm_amd: Nested Paging enabled
Dec 16 03:16:35.936486 kernel: kvm_amd: LBR virtualization supported
Dec 16 03:16:35.936504 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 16 03:16:35.936525 kernel: kvm_amd: Virtual GIF supported
Dec 16 03:16:35.974122 ldconfig[1442]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 03:16:35.985264 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 03:16:35.988466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:16:35.996333 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 03:16:36.016987 kernel: EDAC MC: Ver: 3.0.0
Dec 16 03:16:36.029507 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 03:16:36.032109 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 03:16:36.034386 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 03:16:36.037139 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 03:16:36.039657 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 03:16:36.042386 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 03:16:36.044578 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 03:16:36.047201 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 03:16:36.049707 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 03:16:36.051848 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 03:16:36.054324 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 03:16:36.054367 systemd[1]: Reached target paths.target - Path Units.
Dec 16 03:16:36.056210 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 03:16:36.059884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 03:16:36.064434 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 03:16:36.069150 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 03:16:36.071857 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 03:16:36.074149 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 03:16:36.080588 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 03:16:36.082969 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 03:16:36.085919 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 03:16:36.088853 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 03:16:36.090605 systemd[1]: Reached target basic.target - Basic System.
Dec 16 03:16:36.092344 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 03:16:36.092377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 03:16:36.093738 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 03:16:36.097341 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 03:16:36.100508 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 03:16:36.104476 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 03:16:36.117402 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 03:16:36.119884 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 03:16:36.121775 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 03:16:36.125281 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 03:16:36.128848 jq[1589]: false
Dec 16 03:16:36.129422 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 03:16:36.134296 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 03:16:36.142253 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 03:16:36.151031 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing passwd entry cache
Dec 16 03:16:36.150266 oslogin_cache_refresh[1591]: Refreshing passwd entry cache
Dec 16 03:16:36.159925 extend-filesystems[1590]: Found /dev/vda6
Dec 16 03:16:36.164475 oslogin_cache_refresh[1591]: Failure getting users, quitting
Dec 16 03:16:36.164641 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting users, quitting
Dec 16 03:16:36.164641 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 03:16:36.164641 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing group entry cache
Dec 16 03:16:36.164501 oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 03:16:36.164553 oslogin_cache_refresh[1591]: Refreshing group entry cache
Dec 16 03:16:36.166983 extend-filesystems[1590]: Found /dev/vda9
Dec 16 03:16:36.167496 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 03:16:36.171984 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 03:16:36.172558 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 03:16:36.173369 extend-filesystems[1590]: Checking size of /dev/vda9
Dec 16 03:16:36.177568 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting groups, quitting
Dec 16 03:16:36.177568 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 03:16:36.175075 oslogin_cache_refresh[1591]: Failure getting groups, quitting
Dec 16 03:16:36.176234 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 03:16:36.175086 oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 03:16:36.179160 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 03:16:36.190060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 03:16:36.196495 jq[1608]: true
Dec 16 03:16:36.193858 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 03:16:36.194208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 03:16:36.194582 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 03:16:36.194876 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 03:16:36.199602 extend-filesystems[1590]: Resized partition /dev/vda9
Dec 16 03:16:36.198222 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 03:16:36.198535 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 03:16:36.201033 extend-filesystems[1619]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 03:16:36.213732 update_engine[1602]: I20251216 03:16:36.212792 1602 main.cc:92] Flatcar Update Engine starting
Dec 16 03:16:36.219432 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Dec 16 03:16:36.205520 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 03:16:36.205866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 03:16:36.285078 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Dec 16 03:16:36.332085 tar[1618]: linux-amd64/LICENSE
Dec 16 03:16:36.332808 jq[1620]: true
Dec 16 03:16:36.335632 tar[1618]: linux-amd64/helm
Dec 16 03:16:36.337127 extend-filesystems[1619]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 03:16:36.337127 extend-filesystems[1619]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 16 03:16:36.337127 extend-filesystems[1619]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Dec 16 03:16:36.344395 extend-filesystems[1590]: Resized filesystem in /dev/vda9
Dec 16 03:16:36.343813 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 03:16:36.344389 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 03:16:36.364166 bash[1652]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 03:16:36.365467 dbus-daemon[1587]: [system] SELinux support is enabled
Dec 16 03:16:36.366141 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 03:16:36.380030 update_engine[1602]: I20251216 03:16:36.371799 1602 update_check_scheduler.cc:74] Next update check in 10m33s
Dec 16 03:16:36.376969 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 03:16:36.387615 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 03:16:36.390203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 03:16:36.390235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 03:16:36.392825 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 03:16:36.392859 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 03:16:36.395638 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 03:16:36.403008 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 03:16:36.425666 systemd-logind[1600]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 03:16:36.425691 systemd-logind[1600]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 03:16:36.426206 systemd-logind[1600]: New seat seat0.
Dec 16 03:16:36.489434 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 03:16:36.535122 locksmithd[1658]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 03:16:36.759055 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 03:16:36.794679 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 03:16:36.798633 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 03:16:36.814454 containerd[1622]: time="2025-12-16T03:16:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:16:36.815185 containerd[1622]: time="2025-12-16T03:16:36.815153858Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:16:36.818487 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:16:36.819178 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:16:36.824866 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:16:36.886081 containerd[1622]: time="2025-12-16T03:16:36.885998928Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.124µs" Dec 16 03:16:36.886081 containerd[1622]: time="2025-12-16T03:16:36.886059582Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:16:36.886216 containerd[1622]: time="2025-12-16T03:16:36.886119244Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:16:36.886216 containerd[1622]: time="2025-12-16T03:16:36.886131567Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:16:36.886403 containerd[1622]: time="2025-12-16T03:16:36.886365786Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:16:36.886403 containerd[1622]: time="2025-12-16T03:16:36.886388629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:16:36.886498 containerd[1622]: time="2025-12-16T03:16:36.886479169Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:16:36.886498 containerd[1622]: time="2025-12-16T03:16:36.886493676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:16:36.886872 containerd[1622]: time="2025-12-16T03:16:36.886835397Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:16:36.886872 containerd[1622]: time="2025-12-16T03:16:36.886855254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:16:36.886872 containerd[1622]: time="2025-12-16T03:16:36.886868108Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:16:36.886872 containerd[1622]: time="2025-12-16T03:16:36.886877205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:16:36.888137 containerd[1622]: time="2025-12-16T03:16:36.887884765Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:16:36.888137 containerd[1622]: time="2025-12-16T03:16:36.887928787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:16:36.888137 containerd[1622]: time="2025-12-16T03:16:36.888091262Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:16:36.888417 containerd[1622]: time="2025-12-16T03:16:36.888323678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 16 03:16:36.888417 containerd[1622]: time="2025-12-16T03:16:36.888362641Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:16:36.888417 containerd[1622]: time="2025-12-16T03:16:36.888371758Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:16:36.888489 containerd[1622]: time="2025-12-16T03:16:36.888422964Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:16:36.890057 containerd[1622]: time="2025-12-16T03:16:36.888748575Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:16:36.890057 containerd[1622]: time="2025-12-16T03:16:36.888825990Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:16:36.898763 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:16:36.905205 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 16 03:16:36.906076 containerd[1622]: time="2025-12-16T03:16:36.906029923Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906118920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906237833Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906255296Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906271637Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906284671Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906295842Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906307113Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906318194Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906336037Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906354382Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906365503Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906374389Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:16:36.906434 containerd[1622]: time="2025-12-16T03:16:36.906386652Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906548426Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906573633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906586597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906598660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906610542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906619559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906629638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906641881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906653613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906664403Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906673921Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906700481Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:16:36.906774 containerd[1622]: time="2025-12-16T03:16:36.906783186Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:16:36.907397 containerd[1622]: time="2025-12-16T03:16:36.906800709Z" level=info msg="Start snapshots syncer" Dec 16 03:16:36.907397 containerd[1622]: time="2025-12-16T03:16:36.906836115Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:16:36.907397 containerd[1622]: time="2025-12-16T03:16:36.907198976Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907245904Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:16:36.907611 containerd[1622]: 
time="2025-12-16T03:16:36.907294825Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907401135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907419479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907440198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907450507Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907465796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907475554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907487537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907499018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907510921Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907540586Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:16:36.907611 containerd[1622]: 
time="2025-12-16T03:16:36.907552348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:16:36.907611 containerd[1622]: time="2025-12-16T03:16:36.907560143Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907569871Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907577746Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907586823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907599026Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907620586Z" level=info msg="runtime interface created" Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907626297Z" level=info msg="created NRI interface" Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907638881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907652406Z" level=info msg="Connect containerd service" Dec 16 03:16:36.907940 containerd[1622]: time="2025-12-16T03:16:36.907677142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:16:36.909905 containerd[1622]: time="2025-12-16T03:16:36.908417000Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:16:36.910384 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:16:36.912642 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:16:37.112684 tar[1618]: linux-amd64/README.md Dec 16 03:16:37.132688 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:16:37.140700 containerd[1622]: time="2025-12-16T03:16:37.140645992Z" level=info msg="Start subscribing containerd event" Dec 16 03:16:37.140884 containerd[1622]: time="2025-12-16T03:16:37.140739077Z" level=info msg="Start recovering state" Dec 16 03:16:37.141079 containerd[1622]: time="2025-12-16T03:16:37.141046333Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:16:37.141161 containerd[1622]: time="2025-12-16T03:16:37.141094223Z" level=info msg="Start event monitor" Dec 16 03:16:37.141161 containerd[1622]: time="2025-12-16T03:16:37.141118929Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:16:37.141161 containerd[1622]: time="2025-12-16T03:16:37.141144196Z" level=info msg="Start streaming server" Dec 16 03:16:37.141505 containerd[1622]: time="2025-12-16T03:16:37.141147493Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:16:37.141505 containerd[1622]: time="2025-12-16T03:16:37.141213887Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:16:37.141505 containerd[1622]: time="2025-12-16T03:16:37.141241168Z" level=info msg="runtime interface starting up..." Dec 16 03:16:37.141505 containerd[1622]: time="2025-12-16T03:16:37.141257739Z" level=info msg="starting plugins..." 
Dec 16 03:16:37.141505 containerd[1622]: time="2025-12-16T03:16:37.141289669Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:16:37.142157 containerd[1622]: time="2025-12-16T03:16:37.142110819Z" level=info msg="containerd successfully booted in 0.328342s" Dec 16 03:16:37.142443 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:16:37.181189 systemd-networkd[1519]: eth0: Gained IPv6LL Dec 16 03:16:37.184493 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:16:37.187304 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:16:37.190651 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 03:16:37.193764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:37.196598 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:16:37.226408 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 03:16:37.228942 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 03:16:37.229356 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 03:16:37.233806 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:16:38.273662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:16:38.276096 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:16:38.278252 systemd[1]: Startup finished in 3.161s (kernel) + 8.834s (initrd) + 6.006s (userspace) = 18.002s. 
Dec 16 03:16:38.289336 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:16:38.717684 kubelet[1727]: E1216 03:16:38.717620 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:16:38.721539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:16:38.721726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:16:38.722124 systemd[1]: kubelet.service: Consumed 1.329s CPU time, 268.2M memory peak. Dec 16 03:16:45.811834 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:16:45.815006 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:54642.service - OpenSSH per-connection server daemon (10.0.0.1:54642). Dec 16 03:16:45.914673 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 54642 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:45.917278 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:45.925671 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:16:45.927040 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:16:45.932140 systemd-logind[1600]: New session 1 of user core. Dec 16 03:16:45.959631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:16:45.963002 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 03:16:45.987299 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:45.990658 systemd-logind[1600]: New session 2 of user core. Dec 16 03:16:46.162702 systemd[1747]: Queued start job for default target default.target. Dec 16 03:16:46.183505 systemd[1747]: Created slice app.slice - User Application Slice. Dec 16 03:16:46.183539 systemd[1747]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:16:46.183557 systemd[1747]: Reached target paths.target - Paths. Dec 16 03:16:46.183610 systemd[1747]: Reached target timers.target - Timers. Dec 16 03:16:46.185594 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:16:46.186809 systemd[1747]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:16:46.201065 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:16:46.201248 systemd[1747]: Reached target sockets.target - Sockets. Dec 16 03:16:46.203762 systemd[1747]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:16:46.203909 systemd[1747]: Reached target basic.target - Basic System. Dec 16 03:16:46.204017 systemd[1747]: Reached target default.target - Main User Target. Dec 16 03:16:46.204060 systemd[1747]: Startup finished in 202ms. Dec 16 03:16:46.204264 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:16:46.217174 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:16:46.236668 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:54656.service - OpenSSH per-connection server daemon (10.0.0.1:54656). 
Dec 16 03:16:46.301510 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:46.303665 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:46.310333 systemd-logind[1600]: New session 3 of user core. Dec 16 03:16:46.324258 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:16:46.339928 sshd[1765]: Connection closed by 10.0.0.1 port 54656 Dec 16 03:16:46.340336 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:46.357984 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:54656.service: Deactivated successfully. Dec 16 03:16:46.359934 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 03:16:46.360818 systemd-logind[1600]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:16:46.363620 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:54662.service - OpenSSH per-connection server daemon (10.0.0.1:54662). Dec 16 03:16:46.364436 systemd-logind[1600]: Removed session 3. Dec 16 03:16:46.429266 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 54662 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:46.431342 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:46.436737 systemd-logind[1600]: New session 4 of user core. Dec 16 03:16:46.446260 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:16:46.457126 sshd[1775]: Connection closed by 10.0.0.1 port 54662 Dec 16 03:16:46.457520 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:46.470844 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:54662.service: Deactivated successfully. Dec 16 03:16:46.472813 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:16:46.473602 systemd-logind[1600]: Session 4 logged out. Waiting for processes to exit. 
Dec 16 03:16:46.476522 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:54664.service - OpenSSH per-connection server daemon (10.0.0.1:54664). Dec 16 03:16:46.477386 systemd-logind[1600]: Removed session 4. Dec 16 03:16:46.540571 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 54664 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:46.542532 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:46.547962 systemd-logind[1600]: New session 5 of user core. Dec 16 03:16:46.570431 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:16:46.588323 sshd[1785]: Connection closed by 10.0.0.1 port 54664 Dec 16 03:16:46.588659 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:46.608084 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:54664.service: Deactivated successfully. Dec 16 03:16:46.610856 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:16:46.611972 systemd-logind[1600]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:16:46.615256 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:54678.service - OpenSSH per-connection server daemon (10.0.0.1:54678). Dec 16 03:16:46.616697 systemd-logind[1600]: Removed session 5. Dec 16 03:16:46.684213 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 54678 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:46.686013 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:46.691193 systemd-logind[1600]: New session 6 of user core. Dec 16 03:16:46.704303 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 03:16:46.733343 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:16:46.733747 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:46.748414 sudo[1796]: pam_unix(sudo:session): session closed for user root Dec 16 03:16:46.750574 sshd[1795]: Connection closed by 10.0.0.1 port 54678 Dec 16 03:16:46.751009 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:46.770236 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:54678.service: Deactivated successfully. Dec 16 03:16:46.772304 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:16:46.773361 systemd-logind[1600]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:16:46.776665 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:54694.service - OpenSSH per-connection server daemon (10.0.0.1:54694). Dec 16 03:16:46.777493 systemd-logind[1600]: Removed session 6. Dec 16 03:16:46.843305 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 54694 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:46.845995 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:46.851362 systemd-logind[1600]: New session 7 of user core. Dec 16 03:16:46.862138 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 03:16:46.884586 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:16:46.885285 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:47.128825 sudo[1810]: pam_unix(sudo:session): session closed for user root Dec 16 03:16:47.138347 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:16:47.138704 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:47.149659 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:16:47.208000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:16:47.209471 augenrules[1834]: No rules Dec 16 03:16:47.217100 kernel: kauditd_printk_skb: 171 callbacks suppressed Dec 16 03:16:47.217199 kernel: audit: type=1305 audit(1765855007.208:216): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:16:47.218772 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:16:47.219276 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 03:16:47.208000 audit[1834]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe410e8f00 a2=420 a3=0 items=0 ppid=1815 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.220893 sudo[1809]: pam_unix(sudo:session): session closed for user root Dec 16 03:16:47.223061 sshd[1808]: Connection closed by 10.0.0.1 port 54694 Dec 16 03:16:47.223599 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:47.225411 kernel: audit: type=1300 audit(1765855007.208:216): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe410e8f00 a2=420 a3=0 items=0 ppid=1815 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.225512 kernel: audit: type=1327 audit(1765855007.208:216): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:16:47.208000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:16:47.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.232004 kernel: audit: type=1130 audit(1765855007.216:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.232084 kernel: audit: type=1131 audit(1765855007.219:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:16:47.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.219000 audit[1809]: USER_END pid=1809 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.240714 kernel: audit: type=1106 audit(1765855007.219:219): pid=1809 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.240885 kernel: audit: type=1104 audit(1765855007.219:220): pid=1809 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.219000 audit[1809]: CRED_DISP pid=1809 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:47.245092 kernel: audit: type=1106 audit(1765855007.224:221): pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.224000 audit[1803]: USER_END pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.224000 audit[1803]: CRED_DISP pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.264654 kernel: audit: type=1104 audit(1765855007.224:222): pid=1803 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.271254 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:54694.service: Deactivated successfully. Dec 16 03:16:47.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.59:22-10.0.0.1:54694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.273814 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:16:47.274831 systemd-logind[1600]: Session 7 logged out. Waiting for processes to exit. 
Dec 16 03:16:47.276994 kernel: audit: type=1131 audit(1765855007.270:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.59:22-10.0.0.1:54694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.278594 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:54698.service - OpenSSH per-connection server daemon (10.0.0.1:54698). Dec 16 03:16:47.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:54698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.279426 systemd-logind[1600]: Removed session 7. Dec 16 03:16:47.346000 audit[1843]: USER_ACCT pid=1843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.347847 sshd[1843]: Accepted publickey for core from 10.0.0.1 port 54698 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:16:47.348000 audit[1843]: CRED_ACQ pid=1843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.348000 audit[1843]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb39c70b0 a2=3 a3=0 items=0 ppid=1 pid=1843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:47.350048 sshd-session[1843]: pam_unix(sshd:session): session opened for 
user core(uid=500) by core(uid=0) Dec 16 03:16:47.356326 systemd-logind[1600]: New session 8 of user core. Dec 16 03:16:47.366208 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:16:47.368000 audit[1843]: USER_START pid=1843 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.370000 audit[1847]: CRED_ACQ pid=1847 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:47.382000 audit[1848]: USER_ACCT pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.384076 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:16:47.383000 audit[1848]: CRED_REFR pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.383000 audit[1848]: USER_START pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:47.384490 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:48.344570 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 03:16:48.409543 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:16:48.972361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:16:48.974327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:49.304391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:16:49.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:49.329530 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:16:49.411519 dockerd[1869]: time="2025-12-16T03:16:49.411448637Z" level=info msg="Starting up" Dec 16 03:16:49.413196 dockerd[1869]: time="2025-12-16T03:16:49.413102980Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:16:49.485343 dockerd[1869]: time="2025-12-16T03:16:49.485271902Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:16:49.487932 kubelet[1883]: E1216 03:16:49.487865 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:16:49.496284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:16:49.496518 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 03:16:49.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:16:49.497264 systemd[1]: kubelet.service: Consumed 484ms CPU time, 116.3M memory peak. Dec 16 03:16:50.860882 dockerd[1869]: time="2025-12-16T03:16:50.860812492Z" level=info msg="Loading containers: start." Dec 16 03:16:50.957483 kernel: Initializing XFRM netlink socket Dec 16 03:16:51.041000 audit[1939]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.041000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec4cfaeb0 a2=0 a3=0 items=0 ppid=1869 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:16:51.044000 audit[1941]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.044000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff16632f30 a2=0 a3=0 items=0 ppid=1869 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.044000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:16:51.046000 audit[1943]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.046000 audit[1943]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff467fedb0 a2=0 a3=0 items=0 ppid=1869 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:16:51.051000 audit[1945]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.051000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe33b5e960 a2=0 a3=0 items=0 ppid=1869 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:16:51.054000 audit[1947]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.054000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff352893a0 a2=0 a3=0 items=0 ppid=1869 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:16:51.057000 audit[1949]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.057000 audit[1949]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7ffcc5e6f950 a2=0 a3=0 items=0 ppid=1869 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:51.060000 audit[1951]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.060000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb8bcbd70 a2=0 a3=0 items=0 ppid=1869 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:16:51.063000 audit[1953]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.063000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd844ac4e0 a2=0 a3=0 items=0 ppid=1869 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.063000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:16:51.107000 audit[1956]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1956 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.107000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffcb811f5d0 a2=0 a3=0 items=0 ppid=1869 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:16:51.111000 audit[1958]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.111000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd33810600 a2=0 a3=0 items=0 ppid=1869 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:16:51.114000 audit[1960]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.114000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe7d0ef340 a2=0 a3=0 items=0 ppid=1869 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 
03:16:51.118000 audit[1962]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.118000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffaa6748a0 a2=0 a3=0 items=0 ppid=1869 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:51.122000 audit[1964]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.122000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd0ae08b90 a2=0 a3=0 items=0 ppid=1869 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.122000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:16:51.197000 audit[1994]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.197000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff1230ba90 a2=0 a3=0 items=0 ppid=1869 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.197000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:16:51.200000 audit[1996]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.200000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff24c891e0 a2=0 a3=0 items=0 ppid=1869 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:16:51.204000 audit[1998]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.204000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1a9fff20 a2=0 a3=0 items=0 ppid=1869 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:16:51.208000 audit[2000]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.208000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcca858fe0 a2=0 a3=0 items=0 ppid=1869 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.208000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:16:51.212000 audit[2002]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.212000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd89a85d30 a2=0 a3=0 items=0 ppid=1869 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:16:51.216000 audit[2004]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.216000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcf982adf0 a2=0 a3=0 items=0 ppid=1869 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:51.219000 audit[2006]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.219000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc74d0cb70 a2=0 a3=0 items=0 ppid=1869 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.219000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:16:51.222000 audit[2008]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.222000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffd9680dd0 a2=0 a3=0 items=0 ppid=1869 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:16:51.226000 audit[2010]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.226000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fffd454fd20 a2=0 a3=0 items=0 ppid=1869 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:16:51.231000 audit[2012]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.231000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff907d9090 a2=0 a3=0 items=0 ppid=1869 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:16:51.235000 audit[2014]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.235000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe43e14690 a2=0 a3=0 items=0 ppid=1869 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.235000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:16:51.238000 audit[2016]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.238000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe5cc57140 a2=0 a3=0 items=0 ppid=1869 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:51.242000 audit[2018]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.242000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffceff053b0 a2=0 a3=0 items=0 
ppid=1869 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:16:51.250000 audit[2023]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.250000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd593997b0 a2=0 a3=0 items=0 ppid=1869 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:16:51.253000 audit[2025]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.253000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd84123ff0 a2=0 a3=0 items=0 ppid=1869 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:16:51.259000 audit[2027]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.259000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdec106e90 a2=0 a3=0 items=0 ppid=1869 
pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:16:51.262000 audit[2029]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.262000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb4134e50 a2=0 a3=0 items=0 ppid=1869 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:16:51.266000 audit[2031]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.266000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff281ffe60 a2=0 a3=0 items=0 ppid=1869 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:16:51.269000 audit[2033]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:51.269000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff25db96e0 a2=0 a3=0 items=0 ppid=1869 pid=2033 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:16:51.477000 audit[2037]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.477000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffabeb2ad0 a2=0 a3=0 items=0 ppid=1869 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:16:51.483000 audit[2039]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.483000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc6a38acc0 a2=0 a3=0 items=0 ppid=1869 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:16:51.500000 audit[2047]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.500000 audit[2047]: SYSCALL arch=c000003e syscall=46 
success=yes exit=300 a0=3 a1=7ffed6f8b010 a2=0 a3=0 items=0 ppid=1869 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:16:51.512000 audit[2053]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.512000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc20e0a2b0 a2=0 a3=0 items=0 ppid=1869 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:16:51.516000 audit[2055]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.516000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffdaded2f40 a2=0 a3=0 items=0 ppid=1869 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:16:51.519000 audit[2057]: 
NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.519000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7fd61100 a2=0 a3=0 items=0 ppid=1869 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.519000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:16:51.523000 audit[2059]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.523000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffb3de4990 a2=0 a3=0 items=0 ppid=1869 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.523000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:16:51.526000 audit[2061]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:51.526000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdd1e5c540 a2=0 a3=0 items=0 ppid=1869 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:51.526000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:16:51.527818 systemd-networkd[1519]: docker0: Link UP Dec 16 03:16:51.567095 dockerd[1869]: time="2025-12-16T03:16:51.565988999Z" level=info msg="Loading containers: done." Dec 16 03:16:51.710565 dockerd[1869]: time="2025-12-16T03:16:51.710303010Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:16:51.710825 dockerd[1869]: time="2025-12-16T03:16:51.710678835Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:16:51.710946 dockerd[1869]: time="2025-12-16T03:16:51.710916210Z" level=info msg="Initializing buildkit" Dec 16 03:16:51.955494 dockerd[1869]: time="2025-12-16T03:16:51.955404062Z" level=info msg="Completed buildkit initialization" Dec 16 03:16:51.966334 dockerd[1869]: time="2025-12-16T03:16:51.966120148Z" level=info msg="Daemon has completed initialization" Dec 16 03:16:51.966334 dockerd[1869]: time="2025-12-16T03:16:51.966258007Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:16:51.967282 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:16:51.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:53.057801 containerd[1622]: time="2025-12-16T03:16:53.057676065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 03:16:54.907500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222648743.mount: Deactivated successfully. 
Dec 16 03:16:59.490412 containerd[1622]: time="2025-12-16T03:16:59.489899310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:59.501853 containerd[1622]: time="2025-12-16T03:16:59.494397806Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29095659" Dec 16 03:16:59.501853 containerd[1622]: time="2025-12-16T03:16:59.496507733Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:59.510216 containerd[1622]: time="2025-12-16T03:16:59.508750252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:59.515071 containerd[1622]: time="2025-12-16T03:16:59.514942074Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 6.457120165s" Dec 16 03:16:59.515071 containerd[1622]: time="2025-12-16T03:16:59.515062500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 03:16:59.519545 containerd[1622]: time="2025-12-16T03:16:59.519257296Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 03:16:59.522134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 03:16:59.524416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:17:00.065165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:00.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:00.075841 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 03:17:00.076029 kernel: audit: type=1130 audit(1765855020.064:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:00.106530 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:17:00.413219 kubelet[2172]: E1216 03:17:00.412580 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:17:00.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:17:00.423836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:17:00.424118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:17:00.426888 systemd[1]: kubelet.service: Consumed 396ms CPU time, 110.9M memory peak. 
Dec 16 03:17:00.434945 kernel: audit: type=1131 audit(1765855020.423:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:17:04.479438 containerd[1622]: time="2025-12-16T03:17:04.479331596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:04.481437 containerd[1622]: time="2025-12-16T03:17:04.481374277Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26009989" Dec 16 03:17:04.483128 containerd[1622]: time="2025-12-16T03:17:04.483076479Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:04.489119 containerd[1622]: time="2025-12-16T03:17:04.489058097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:04.491610 containerd[1622]: time="2025-12-16T03:17:04.490155625Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 4.970847664s" Dec 16 03:17:04.491610 containerd[1622]: time="2025-12-16T03:17:04.490197003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 03:17:04.492069 
containerd[1622]: time="2025-12-16T03:17:04.492026243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 03:17:06.374860 containerd[1622]: time="2025-12-16T03:17:06.374774959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:06.376540 containerd[1622]: time="2025-12-16T03:17:06.376432758Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 03:17:06.378155 containerd[1622]: time="2025-12-16T03:17:06.378088273Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:06.381600 containerd[1622]: time="2025-12-16T03:17:06.381539685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:06.382939 containerd[1622]: time="2025-12-16T03:17:06.382890038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.890822587s" Dec 16 03:17:06.382939 containerd[1622]: time="2025-12-16T03:17:06.382935653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 03:17:06.383852 containerd[1622]: time="2025-12-16T03:17:06.383815423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 03:17:07.972651 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1448847289.mount: Deactivated successfully. Dec 16 03:17:10.526129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 03:17:10.541506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:17:11.591976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:11.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:11.607513 kernel: audit: type=1130 audit(1765855031.591:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:11.616214 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:17:11.630346 containerd[1622]: time="2025-12-16T03:17:11.630233335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:11.637991 containerd[1622]: time="2025-12-16T03:17:11.634881192Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 16 03:17:11.637991 containerd[1622]: time="2025-12-16T03:17:11.635032722Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:11.643659 containerd[1622]: time="2025-12-16T03:17:11.643580265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 
03:17:11.649339 containerd[1622]: time="2025-12-16T03:17:11.649257659Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 5.26540183s" Dec 16 03:17:11.649555 containerd[1622]: time="2025-12-16T03:17:11.649533928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 03:17:11.651783 containerd[1622]: time="2025-12-16T03:17:11.651139628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 03:17:11.759997 kubelet[2204]: E1216 03:17:11.756459 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:17:11.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:17:11.768228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:17:11.768476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:17:11.769060 systemd[1]: kubelet.service: Consumed 368ms CPU time, 108.6M memory peak. Dec 16 03:17:11.775031 kernel: audit: type=1131 audit(1765855031.766:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:17:13.129096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354917310.mount: Deactivated successfully. Dec 16 03:17:15.872733 containerd[1622]: time="2025-12-16T03:17:15.872622431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:15.873801 containerd[1622]: time="2025-12-16T03:17:15.873751170Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20711460" Dec 16 03:17:15.875302 containerd[1622]: time="2025-12-16T03:17:15.875255654Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:15.878994 containerd[1622]: time="2025-12-16T03:17:15.878932433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:15.879982 containerd[1622]: time="2025-12-16T03:17:15.879938068Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.228745148s" Dec 16 03:17:15.880037 containerd[1622]: time="2025-12-16T03:17:15.879985017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 03:17:15.880497 containerd[1622]: time="2025-12-16T03:17:15.880460371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:17:16.420781 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1280276789.mount: Deactivated successfully. Dec 16 03:17:16.428887 containerd[1622]: time="2025-12-16T03:17:16.428809237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:17:16.429700 containerd[1622]: time="2025-12-16T03:17:16.429627863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:17:16.430887 containerd[1622]: time="2025-12-16T03:17:16.430850318Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:17:16.433313 containerd[1622]: time="2025-12-16T03:17:16.433256815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:17:16.434006 containerd[1622]: time="2025-12-16T03:17:16.433943501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 553.453583ms" Dec 16 03:17:16.434068 containerd[1622]: time="2025-12-16T03:17:16.434008465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:17:16.434815 containerd[1622]: time="2025-12-16T03:17:16.434769502Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 03:17:17.437904 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3573182218.mount: Deactivated successfully. Dec 16 03:17:20.112441 containerd[1622]: time="2025-12-16T03:17:20.112354929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:20.113338 containerd[1622]: time="2025-12-16T03:17:20.113294400Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=47286611" Dec 16 03:17:20.115528 containerd[1622]: time="2025-12-16T03:17:20.115459876Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:20.121420 containerd[1622]: time="2025-12-16T03:17:20.121300800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:20.122492 containerd[1622]: time="2025-12-16T03:17:20.122384995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.687548185s" Dec 16 03:17:20.122492 containerd[1622]: time="2025-12-16T03:17:20.122469896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 03:17:21.280482 update_engine[1602]: I20251216 03:17:21.280385 1602 update_attempter.cc:509] Updating boot flags... Dec 16 03:17:21.772207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Dec 16 03:17:21.774474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:17:22.050112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:22.065088 kernel: audit: type=1130 audit(1765855042.052:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:22.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:22.070378 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:17:22.254917 kubelet[2371]: E1216 03:17:22.254730 2371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:17:22.269722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:17:22.269981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:17:22.280710 kernel: audit: type=1131 audit(1765855042.269:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:17:22.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:17:22.270756 systemd[1]: kubelet.service: Consumed 367ms CPU time, 110.7M memory peak. Dec 16 03:17:24.597817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:24.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:24.598096 systemd[1]: kubelet.service: Consumed 367ms CPU time, 110.7M memory peak. Dec 16 03:17:24.604129 kernel: audit: type=1130 audit(1765855044.596:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:24.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:24.606324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:17:24.609993 kernel: audit: type=1131 audit(1765855044.596:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:24.646773 systemd[1]: Reload requested from client PID 2388 ('systemctl') (unit session-8.scope)... Dec 16 03:17:24.646798 systemd[1]: Reloading... Dec 16 03:17:24.758010 zram_generator::config[2434]: No configuration found. Dec 16 03:17:25.804833 systemd[1]: Reloading finished in 1157 ms. 
Dec 16 03:17:25.855000 audit: BPF prog-id=61 op=LOAD Dec 16 03:17:25.855000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:17:25.861666 kernel: audit: type=1334 audit(1765855045.855:284): prog-id=61 op=LOAD Dec 16 03:17:25.861778 kernel: audit: type=1334 audit(1765855045.855:285): prog-id=51 op=UNLOAD Dec 16 03:17:25.861835 kernel: audit: type=1334 audit(1765855045.857:286): prog-id=62 op=LOAD Dec 16 03:17:25.857000 audit: BPF prog-id=62 op=LOAD Dec 16 03:17:25.857000 audit: BPF prog-id=63 op=LOAD Dec 16 03:17:25.863997 kernel: audit: type=1334 audit(1765855045.857:287): prog-id=63 op=LOAD Dec 16 03:17:25.857000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:17:25.868560 kernel: audit: type=1334 audit(1765855045.857:288): prog-id=52 op=UNLOAD Dec 16 03:17:25.868669 kernel: audit: type=1334 audit(1765855045.857:289): prog-id=53 op=UNLOAD Dec 16 03:17:25.857000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:17:25.857000 audit: BPF prog-id=64 op=LOAD Dec 16 03:17:25.857000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:17:25.865000 audit: BPF prog-id=65 op=LOAD Dec 16 03:17:25.879000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:17:25.879000 audit: BPF prog-id=66 op=LOAD Dec 16 03:17:25.879000 audit: BPF prog-id=67 op=LOAD Dec 16 03:17:25.879000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:17:25.879000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:17:25.881000 audit: BPF prog-id=68 op=LOAD Dec 16 03:17:25.881000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:17:25.884000 audit: BPF prog-id=69 op=LOAD Dec 16 03:17:25.884000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:17:25.884000 audit: BPF prog-id=70 op=LOAD Dec 16 03:17:25.884000 audit: BPF prog-id=71 op=LOAD Dec 16 03:17:25.884000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:17:25.884000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:17:25.885000 audit: BPF prog-id=72 op=LOAD Dec 16 03:17:25.885000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:17:25.885000 audit: BPF prog-id=73 op=LOAD Dec 16 03:17:25.885000 audit: BPF prog-id=74 op=LOAD Dec 16 03:17:25.885000 
audit: BPF prog-id=46 op=UNLOAD Dec 16 03:17:25.885000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:17:25.892000 audit: BPF prog-id=75 op=LOAD Dec 16 03:17:25.892000 audit: BPF prog-id=76 op=LOAD Dec 16 03:17:25.892000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:17:25.893000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:17:25.893000 audit: BPF prog-id=77 op=LOAD Dec 16 03:17:25.893000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:17:25.893000 audit: BPF prog-id=78 op=LOAD Dec 16 03:17:25.893000 audit: BPF prog-id=79 op=LOAD Dec 16 03:17:25.893000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:17:25.893000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:17:25.895000 audit: BPF prog-id=80 op=LOAD Dec 16 03:17:25.895000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:17:26.012530 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:17:26.012942 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:17:26.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:17:26.015018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:26.015097 systemd[1]: kubelet.service: Consumed 186ms CPU time, 98.5M memory peak. Dec 16 03:17:26.022302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:17:26.254899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:26.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:17:26.268358 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:17:26.319015 kubelet[2481]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:17:26.319015 kubelet[2481]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:17:26.319015 kubelet[2481]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:17:26.319509 kubelet[2481]: I1216 03:17:26.319032 2481 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:17:26.520253 kubelet[2481]: I1216 03:17:26.519592 2481 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:17:26.520253 kubelet[2481]: I1216 03:17:26.519696 2481 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:17:26.521034 kubelet[2481]: I1216 03:17:26.520531 2481 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:17:26.547176 kubelet[2481]: I1216 03:17:26.547124 2481 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:17:26.547324 kubelet[2481]: E1216 03:17:26.547250 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
10.0.0.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 03:17:26.553518 kubelet[2481]: I1216 03:17:26.553482 2481 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:17:26.559296 kubelet[2481]: I1216 03:17:26.559272 2481 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 03:17:26.559539 kubelet[2481]: I1216 03:17:26.559496 2481 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:17:26.559690 kubelet[2481]: I1216 03:17:26.559521 2481 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container",
"CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:17:26.559818 kubelet[2481]: I1216 03:17:26.559696 2481 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:17:26.559818 kubelet[2481]: I1216 03:17:26.559706 2481 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:17:26.559887 kubelet[2481]: I1216 03:17:26.559870 2481 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:17:26.562208 kubelet[2481]: I1216 03:17:26.562182 2481 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:17:26.562256 kubelet[2481]: I1216 03:17:26.562220 2481 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:17:26.562256 kubelet[2481]: I1216 03:17:26.562244 2481 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:17:26.562295 kubelet[2481]: I1216 03:17:26.562258 2481 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:17:26.572162 kubelet[2481]: E1216 03:17:26.572055 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:17:26.572162 kubelet[2481]: E1216 03:17:26.572093 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:17:26.572355 kubelet[2481]: I1216 
03:17:26.572214 2481 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:17:26.572814 kubelet[2481]: I1216 03:17:26.572751 2481 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:17:26.573423 kubelet[2481]: W1216 03:17:26.573386 2481 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 03:17:26.576700 kubelet[2481]: I1216 03:17:26.576668 2481 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:17:26.576740 kubelet[2481]: I1216 03:17:26.576734 2481 server.go:1289] "Started kubelet" Dec 16 03:17:26.580132 kubelet[2481]: I1216 03:17:26.578223 2481 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:17:26.580132 kubelet[2481]: I1216 03:17:26.579299 2481 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:17:26.580599 kubelet[2481]: I1216 03:17:26.580573 2481 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:17:26.581428 kubelet[2481]: I1216 03:17:26.581404 2481 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:17:26.582987 kubelet[2481]: E1216 03:17:26.581855 2481 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188193cf1808ec31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 
03:17:26.576696369 +0000 UTC m=+0.302307758,LastTimestamp:2025-12-16 03:17:26.576696369 +0000 UTC m=+0.302307758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 03:17:26.583778 kubelet[2481]: I1216 03:17:26.583718 2481 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:17:26.584110 kubelet[2481]: E1216 03:17:26.583968 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:26.584110 kubelet[2481]: I1216 03:17:26.583996 2481 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:17:26.584110 kubelet[2481]: I1216 03:17:26.583998 2481 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:17:26.584110 kubelet[2481]: I1216 03:17:26.584066 2481 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:17:26.584110 kubelet[2481]: I1216 03:17:26.584116 2481 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:17:26.584265 kubelet[2481]: E1216 03:17:26.584220 2481 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:17:26.584472 kubelet[2481]: E1216 03:17:26.584441 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:17:26.584775 kubelet[2481]: E1216 03:17:26.584733 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Dec 16 03:17:26.585404 kubelet[2481]: I1216 03:17:26.585373 2481 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:17:26.585404 kubelet[2481]: I1216 03:17:26.585387 2481 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:17:26.585472 kubelet[2481]: I1216 03:17:26.585454 2481 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:17:26.588000 audit[2498]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.588000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd3d1bac90 a2=0 a3=0 items=0 ppid=2481 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.588000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:17:26.589000 audit[2499]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.589000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1ea5fff0 a2=0 a3=0 items=0 ppid=2481 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:17:26.592000 audit[2501]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.592000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd389b4460 a2=0 a3=0 items=0 ppid=2481 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:17:26.595000 audit[2503]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.595000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe8a97b5b0 a2=0 a3=0 items=0 ppid=2481 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.595000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:17:26.601260 kubelet[2481]: I1216 03:17:26.601229 2481 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:17:26.601260 kubelet[2481]: I1216 03:17:26.601247 2481 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:17:26.601260 kubelet[2481]: I1216 03:17:26.601266 2481 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:17:26.604000 audit[2510]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.604000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd32036860 a2=0 a3=0 items=0 ppid=2481 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:17:26.606398 kubelet[2481]: I1216 03:17:26.606333 2481 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 16 03:17:26.606000 audit[2512]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:26.606000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcd4cd3960 a2=0 a3=0 items=0 ppid=2481 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:17:26.607801 kubelet[2481]: I1216 03:17:26.607775 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:17:26.607880 kubelet[2481]: I1216 03:17:26.607867 2481 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:17:26.608005 kubelet[2481]: I1216 03:17:26.607985 2481 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:17:26.608083 kubelet[2481]: I1216 03:17:26.608070 2481 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:17:26.607000 audit[2513]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.607000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc827748f0 a2=0 a3=0 items=0 ppid=2481 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:17:26.608371 kubelet[2481]: E1216 03:17:26.608190 2481 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:17:26.607000 audit[2514]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:26.607000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd067c86f0 a2=0 a3=0 items=0 ppid=2481 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:17:26.608000 audit[2515]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.608000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0a6a9460 a2=0 a3=0 items=0 ppid=2481 
pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:17:26.609000 audit[2517]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:26.609000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd64e8e580 a2=0 a3=0 items=0 ppid=2481 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:17:26.610000 audit[2519]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:26.610000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe850c9ed0 a2=0 a3=0 items=0 ppid=2481 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:17:26.610000 audit[2520]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:26.610000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea7ffff70 a2=0 a3=0 items=0 
ppid=2481 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:26.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:17:26.684243 kubelet[2481]: E1216 03:17:26.684161 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:26.709603 kubelet[2481]: E1216 03:17:26.709515 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:17:26.785192 kubelet[2481]: E1216 03:17:26.785136 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:26.785674 kubelet[2481]: E1216 03:17:26.785641 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Dec 16 03:17:26.886170 kubelet[2481]: E1216 03:17:26.886050 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:26.910330 kubelet[2481]: E1216 03:17:26.910259 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:17:26.986794 kubelet[2481]: E1216 03:17:26.986696 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:27.087919 kubelet[2481]: E1216 03:17:27.087752 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:27.186672 kubelet[2481]: E1216 03:17:27.186604 2481 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Dec 16 03:17:27.188731 kubelet[2481]: E1216 03:17:27.188656 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:27.289125 kubelet[2481]: E1216 03:17:27.289072 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:27.311333 kubelet[2481]: E1216 03:17:27.311267 2481 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:17:27.355845 kubelet[2481]: E1216 03:17:27.355695 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:17:27.359416 kubelet[2481]: I1216 03:17:27.359364 2481 policy_none.go:49] "None policy: Start" Dec 16 03:17:27.359416 kubelet[2481]: I1216 03:17:27.359410 2481 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:17:27.359539 kubelet[2481]: I1216 03:17:27.359431 2481 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:17:27.373973 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:17:27.389643 kubelet[2481]: E1216 03:17:27.389564 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:17:27.394969 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 03:17:27.413359 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:17:27.415164 kubelet[2481]: E1216 03:17:27.415135 2481 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:17:27.415439 kubelet[2481]: I1216 03:17:27.415404 2481 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:17:27.415481 kubelet[2481]: I1216 03:17:27.415449 2481 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:17:27.415809 kubelet[2481]: I1216 03:17:27.415761 2481 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:17:27.420021 kubelet[2481]: E1216 03:17:27.419977 2481 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:17:27.420715 kubelet[2481]: E1216 03:17:27.420666 2481 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 03:17:27.497192 kubelet[2481]: E1216 03:17:27.497124 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:17:27.518185 kubelet[2481]: I1216 03:17:27.518122 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:17:27.518603 kubelet[2481]: E1216 03:17:27.518560 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 16 03:17:27.725589 kubelet[2481]: I1216 03:17:27.722053 
2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:17:27.725589 kubelet[2481]: E1216 03:17:27.722468 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 16 03:17:27.799343 kubelet[2481]: E1216 03:17:27.799250 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:17:27.849213 kubelet[2481]: E1216 03:17:27.849127 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:17:27.995149 kubelet[2481]: E1216 03:17:27.992271 2481 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" Dec 16 03:17:28.124607 kubelet[2481]: I1216 03:17:28.124468 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:17:28.125094 kubelet[2481]: E1216 03:17:28.125037 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 16 03:17:28.154425 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container 
kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 16 03:17:28.195835 kubelet[2481]: I1216 03:17:28.195667 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/496aad6b3bec41d6a2adce0d3cd1eabf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"496aad6b3bec41d6a2adce0d3cd1eabf\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:28.195835 kubelet[2481]: I1216 03:17:28.195729 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:28.195835 kubelet[2481]: I1216 03:17:28.195751 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:28.197224 kubelet[2481]: I1216 03:17:28.195770 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/496aad6b3bec41d6a2adce0d3cd1eabf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"496aad6b3bec41d6a2adce0d3cd1eabf\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:28.197224 kubelet[2481]: I1216 03:17:28.196170 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:28.197224 kubelet[2481]: I1216 03:17:28.196195 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:28.197224 kubelet[2481]: I1216 03:17:28.196214 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:28.197224 kubelet[2481]: I1216 03:17:28.196233 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:28.197462 kubelet[2481]: I1216 03:17:28.196252 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/496aad6b3bec41d6a2adce0d3cd1eabf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"496aad6b3bec41d6a2adce0d3cd1eabf\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:28.201104 kubelet[2481]: E1216 03:17:28.200241 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:28.213034 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - 
libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 16 03:17:28.217418 kubelet[2481]: E1216 03:17:28.217380 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:28.219975 systemd[1]: Created slice kubepods-burstable-pod496aad6b3bec41d6a2adce0d3cd1eabf.slice - libcontainer container kubepods-burstable-pod496aad6b3bec41d6a2adce0d3cd1eabf.slice. Dec 16 03:17:28.222840 kubelet[2481]: E1216 03:17:28.222794 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:28.501333 kubelet[2481]: E1216 03:17:28.501270 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:28.502501 containerd[1622]: time="2025-12-16T03:17:28.502433658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:28.518835 kubelet[2481]: E1216 03:17:28.518782 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:28.519376 containerd[1622]: time="2025-12-16T03:17:28.519330373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:28.636946 kubelet[2481]: E1216 03:17:28.636901 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:28.637849 containerd[1622]: time="2025-12-16T03:17:28.637522231Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:496aad6b3bec41d6a2adce0d3cd1eabf,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:28.637948 kubelet[2481]: E1216 03:17:28.637920 2481 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 03:17:28.657273 containerd[1622]: time="2025-12-16T03:17:28.657098853Z" level=info msg="connecting to shim 8b0736aba05049b91b9aabfce8e2d66d1ea09f090a2de87dad23636eb25f6f36" address="unix:///run/containerd/s/f1fd02b409fcc0c1a0db8a8824c6b2b32fc19c4798a8e87ff9194b5a41fb73b1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:28.659130 containerd[1622]: time="2025-12-16T03:17:28.659075305Z" level=info msg="connecting to shim 9ca7e008898feb5458024c25918f5789c8c3458e8e237efb7cf207ee168bab64" address="unix:///run/containerd/s/4ca11bc9a63abe37eed055165111cb3247cdd7f9fd6105edda2980233de9d732" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:28.726798 containerd[1622]: time="2025-12-16T03:17:28.726720649Z" level=info msg="connecting to shim 57ace64f8bdaef97093ca9bebd4f650561e005e22e8b4f1ab6dace5da8eec144" address="unix:///run/containerd/s/789a19e5056d5187d3b5a51aa62361ad77ae8b2a631baa63d02cb3241dfd0cdd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:28.743250 systemd[1]: Started cri-containerd-9ca7e008898feb5458024c25918f5789c8c3458e8e237efb7cf207ee168bab64.scope - libcontainer container 9ca7e008898feb5458024c25918f5789c8c3458e8e237efb7cf207ee168bab64. Dec 16 03:17:28.748814 systemd[1]: Started cri-containerd-8b0736aba05049b91b9aabfce8e2d66d1ea09f090a2de87dad23636eb25f6f36.scope - libcontainer container 8b0736aba05049b91b9aabfce8e2d66d1ea09f090a2de87dad23636eb25f6f36. 
Dec 16 03:17:28.809193 kernel: kauditd_printk_skb: 72 callbacks suppressed Dec 16 03:17:28.809272 kernel: audit: type=1334 audit(1765855048.805:338): prog-id=81 op=LOAD Dec 16 03:17:28.805000 audit: BPF prog-id=81 op=LOAD Dec 16 03:17:28.814846 kernel: audit: type=1334 audit(1765855048.805:339): prog-id=82 op=LOAD Dec 16 03:17:28.805000 audit: BPF prog-id=82 op=LOAD Dec 16 03:17:28.805000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.822047 kernel: audit: type=1300 audit(1765855048.805:339): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.832507 kernel: audit: type=1327 audit(1765855048.805:339): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.832586 kernel: audit: type=1334 audit(1765855048.805:340): prog-id=82 op=UNLOAD Dec 16 03:17:28.805000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:17:28.805000 audit[2557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.838349 kernel: audit: type=1300 audit(1765855048.805:340): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.843426 kernel: audit: type=1327 audit(1765855048.805:340): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.844857 kernel: audit: type=1334 audit(1765855048.806:341): prog-id=83 op=LOAD Dec 16 03:17:28.806000 audit: BPF prog-id=83 op=LOAD Dec 16 03:17:28.844265 systemd[1]: Started cri-containerd-57ace64f8bdaef97093ca9bebd4f650561e005e22e8b4f1ab6dace5da8eec144.scope - libcontainer container 57ace64f8bdaef97093ca9bebd4f650561e005e22e8b4f1ab6dace5da8eec144. 
Dec 16 03:17:28.806000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.855621 kernel: audit: type=1300 audit(1765855048.806:341): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.855691 kernel: audit: type=1327 audit(1765855048.806:341): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.806000 audit: BPF prog-id=84 op=LOAD Dec 16 03:17:28.806000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.806000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.806000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:17:28.806000 audit[2557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.806000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:17:28.806000 audit[2557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.806000 audit: BPF prog-id=85 op=LOAD Dec 16 03:17:28.806000 audit[2557]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2537 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:28.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613765303038383938666562353435383032346332353931386635 Dec 16 03:17:28.806000 audit: BPF prog-id=86 op=LOAD Dec 16 03:17:28.808000 audit: BPF prog-id=87 op=LOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.808000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.808000 audit: BPF prog-id=88 op=LOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.808000 audit: BPF prog-id=89 op=LOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.808000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.808000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.808000 audit: BPF prog-id=90 op=LOAD Dec 16 03:17:28.808000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2539 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862303733366162613035303439623931623961616266636538653264 Dec 16 03:17:28.887000 audit: BPF prog-id=91 op=LOAD Dec 16 03:17:28.888000 audit: BPF prog-id=92 op=LOAD Dec 16 03:17:28.888000 audit[2608]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.890000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:17:28.890000 audit[2608]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.890000 audit: BPF prog-id=93 op=LOAD Dec 16 03:17:28.890000 audit[2608]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.890000 audit: BPF prog-id=94 op=LOAD Dec 16 03:17:28.890000 audit[2608]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.890000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:17:28.890000 audit[2608]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.891000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:17:28.891000 audit[2608]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.891000 audit: BPF prog-id=95 op=LOAD Dec 16 03:17:28.891000 audit[2608]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2575 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:28.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616365363466386264616566393730393363613962656264346636 Dec 16 03:17:28.895257 containerd[1622]: 
time="2025-12-16T03:17:28.895218862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ca7e008898feb5458024c25918f5789c8c3458e8e237efb7cf207ee168bab64\"" Dec 16 03:17:28.896720 kubelet[2481]: E1216 03:17:28.896416 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:28.900585 containerd[1622]: time="2025-12-16T03:17:28.900536279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b0736aba05049b91b9aabfce8e2d66d1ea09f090a2de87dad23636eb25f6f36\"" Dec 16 03:17:28.901398 kubelet[2481]: E1216 03:17:28.901372 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:28.921204 kubelet[2481]: E1216 03:17:28.921161 2481 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:17:28.927270 kubelet[2481]: I1216 03:17:28.927239 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:17:28.927609 kubelet[2481]: E1216 03:17:28.927581 2481 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Dec 16 03:17:28.935646 containerd[1622]: time="2025-12-16T03:17:28.935587721Z" level=info msg="CreateContainer within sandbox 
\"9ca7e008898feb5458024c25918f5789c8c3458e8e237efb7cf207ee168bab64\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:17:28.936120 containerd[1622]: time="2025-12-16T03:17:28.936081914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:496aad6b3bec41d6a2adce0d3cd1eabf,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ace64f8bdaef97093ca9bebd4f650561e005e22e8b4f1ab6dace5da8eec144\"" Dec 16 03:17:28.936772 kubelet[2481]: E1216 03:17:28.936740 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:28.938379 containerd[1622]: time="2025-12-16T03:17:28.938340076Z" level=info msg="CreateContainer within sandbox \"8b0736aba05049b91b9aabfce8e2d66d1ea09f090a2de87dad23636eb25f6f36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:17:28.946264 containerd[1622]: time="2025-12-16T03:17:28.946208478Z" level=info msg="Container 914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:28.951458 containerd[1622]: time="2025-12-16T03:17:28.951397193Z" level=info msg="CreateContainer within sandbox \"57ace64f8bdaef97093ca9bebd4f650561e005e22e8b4f1ab6dace5da8eec144\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:17:28.956699 containerd[1622]: time="2025-12-16T03:17:28.956662361Z" level=info msg="Container 59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:28.964204 containerd[1622]: time="2025-12-16T03:17:28.964151337Z" level=info msg="CreateContainer within sandbox \"9ca7e008898feb5458024c25918f5789c8c3458e8e237efb7cf207ee168bab64\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697\"" 
Dec 16 03:17:28.964923 containerd[1622]: time="2025-12-16T03:17:28.964890823Z" level=info msg="StartContainer for \"914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697\"" Dec 16 03:17:28.966233 containerd[1622]: time="2025-12-16T03:17:28.966195856Z" level=info msg="connecting to shim 914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697" address="unix:///run/containerd/s/4ca11bc9a63abe37eed055165111cb3247cdd7f9fd6105edda2980233de9d732" protocol=ttrpc version=3 Dec 16 03:17:28.970921 containerd[1622]: time="2025-12-16T03:17:28.970877974Z" level=info msg="CreateContainer within sandbox \"8b0736aba05049b91b9aabfce8e2d66d1ea09f090a2de87dad23636eb25f6f36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d\"" Dec 16 03:17:28.971365 containerd[1622]: time="2025-12-16T03:17:28.971321311Z" level=info msg="StartContainer for \"59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d\"" Dec 16 03:17:28.972697 containerd[1622]: time="2025-12-16T03:17:28.972631284Z" level=info msg="connecting to shim 59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d" address="unix:///run/containerd/s/f1fd02b409fcc0c1a0db8a8824c6b2b32fc19c4798a8e87ff9194b5a41fb73b1" protocol=ttrpc version=3 Dec 16 03:17:28.976903 containerd[1622]: time="2025-12-16T03:17:28.976828476Z" level=info msg="Container 1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:28.987171 systemd[1]: Started cri-containerd-914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697.scope - libcontainer container 914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697. Dec 16 03:17:28.992090 systemd[1]: Started cri-containerd-59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d.scope - libcontainer container 59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d. 
Dec 16 03:17:28.993079 containerd[1622]: time="2025-12-16T03:17:28.993028855Z" level=info msg="CreateContainer within sandbox \"57ace64f8bdaef97093ca9bebd4f650561e005e22e8b4f1ab6dace5da8eec144\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8\"" Dec 16 03:17:28.993749 containerd[1622]: time="2025-12-16T03:17:28.993722124Z" level=info msg="StartContainer for \"1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8\"" Dec 16 03:17:28.995325 containerd[1622]: time="2025-12-16T03:17:28.994946355Z" level=info msg="connecting to shim 1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8" address="unix:///run/containerd/s/789a19e5056d5187d3b5a51aa62361ad77ae8b2a631baa63d02cb3241dfd0cdd" protocol=ttrpc version=3 Dec 16 03:17:29.016299 systemd[1]: Started cri-containerd-1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8.scope - libcontainer container 1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8. 
Dec 16 03:17:29.018000 audit: BPF prog-id=96 op=LOAD Dec 16 03:17:29.019000 audit: BPF prog-id=97 op=LOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.019000 audit: BPF prog-id=97 op=UNLOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.019000 audit: BPF prog-id=98 op=LOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.019000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.019000 audit: BPF prog-id=99 op=LOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.019000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.019000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:29.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.019000 audit: BPF prog-id=100 op=LOAD Dec 16 03:17:29.019000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2537 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632636133303263396365313235313164343631303431343565 Dec 16 03:17:29.027000 audit: BPF prog-id=101 op=LOAD Dec 16 03:17:29.027000 audit: BPF prog-id=102 op=LOAD Dec 16 03:17:29.027000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.028000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:17:29.028000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.028000 audit: BPF prog-id=103 op=LOAD Dec 16 03:17:29.028000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.028000 audit: BPF prog-id=104 op=LOAD Dec 16 03:17:29.028000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.028000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:17:29.028000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.028000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:17:29.028000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.028000 audit: BPF prog-id=105 op=LOAD Dec 16 03:17:29.028000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2539 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666665316334313036656534333433633732643963346162383365 Dec 16 03:17:29.033000 audit: BPF prog-id=106 op=LOAD Dec 16 03:17:29.033000 audit: BPF prog-id=107 op=LOAD Dec 16 03:17:29.033000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.033000 audit: BPF prog-id=107 op=UNLOAD Dec 16 03:17:29.033000 audit[2691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.033000 audit: BPF prog-id=108 op=LOAD Dec 16 03:17:29.033000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.033000 audit: BPF prog-id=109 op=LOAD Dec 16 03:17:29.033000 audit[2691]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.033000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:17:29.033000 audit[2691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.033000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:17:29.033000 audit[2691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.033000 audit: BPF prog-id=110 op=LOAD Dec 
16 03:17:29.033000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2575 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:29.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163366362633136316566386236623538633332356666383132383031 Dec 16 03:17:29.103001 containerd[1622]: time="2025-12-16T03:17:29.102515582Z" level=info msg="StartContainer for \"914f2ca302c9ce12511d46104145e23d653302376a0d037c3bc92a312822e697\" returns successfully" Dec 16 03:17:29.110158 containerd[1622]: time="2025-12-16T03:17:29.110107918Z" level=info msg="StartContainer for \"59ffe1c4106ee4343c72d9c4ab83ee485a1e8fdf1a779f6d9a63d4b297a85b9d\" returns successfully" Dec 16 03:17:29.120454 containerd[1622]: time="2025-12-16T03:17:29.120407831Z" level=info msg="StartContainer for \"1c6cbc161ef8b6b58c325ff8128017b122be6190709399649000912e81a472d8\" returns successfully" Dec 16 03:17:29.643728 kubelet[2481]: E1216 03:17:29.643691 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:29.644184 kubelet[2481]: E1216 03:17:29.643809 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:29.647229 kubelet[2481]: E1216 03:17:29.647206 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:29.647500 kubelet[2481]: E1216 03:17:29.647481 2481 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:29.649038 kubelet[2481]: E1216 03:17:29.649019 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:29.649121 kubelet[2481]: E1216 03:17:29.649105 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:30.529365 kubelet[2481]: I1216 03:17:30.529306 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:17:30.651347 kubelet[2481]: E1216 03:17:30.651309 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:30.651709 kubelet[2481]: E1216 03:17:30.651433 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:30.651709 kubelet[2481]: E1216 03:17:30.651637 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:30.651805 kubelet[2481]: E1216 03:17:30.651751 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:30.652051 kubelet[2481]: E1216 03:17:30.652028 2481 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:17:30.652118 kubelet[2481]: E1216 03:17:30.652106 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:31.118913 kubelet[2481]: E1216 03:17:31.118859 2481 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 03:17:31.219217 kubelet[2481]: I1216 03:17:31.219163 2481 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 03:17:31.285102 kubelet[2481]: I1216 03:17:31.285031 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:31.319694 kubelet[2481]: E1216 03:17:31.319628 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:31.319694 kubelet[2481]: I1216 03:17:31.319667 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:31.321437 kubelet[2481]: E1216 03:17:31.321391 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:31.321437 kubelet[2481]: I1216 03:17:31.321435 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:31.323452 kubelet[2481]: E1216 03:17:31.323402 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:31.639046 kubelet[2481]: I1216 03:17:31.638978 2481 apiserver.go:52] "Watching apiserver" Dec 16 03:17:31.684567 kubelet[2481]: I1216 03:17:31.684509 2481 desired_state_of_world_populator.go:158] "Finished populating 
initial desired state of world" Dec 16 03:17:34.432928 systemd[1]: Reload requested from client PID 2768 ('systemctl') (unit session-8.scope)... Dec 16 03:17:34.432965 systemd[1]: Reloading... Dec 16 03:17:34.561002 zram_generator::config[2815]: No configuration found. Dec 16 03:17:34.863364 systemd[1]: Reloading finished in 429 ms. Dec 16 03:17:34.898282 kubelet[2481]: I1216 03:17:34.898187 2481 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:17:34.898539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:17:34.930331 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:17:34.930757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:34.930824 systemd[1]: kubelet.service: Consumed 959ms CPU time, 128.1M memory peak. Dec 16 03:17:34.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:34.932145 kernel: kauditd_printk_skb: 122 callbacks suppressed Dec 16 03:17:34.932208 kernel: audit: type=1131 audit(1765855054.928:386): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:34.932826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
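The audit PROCTITLE records above encode the audited process's command line as a hex string of NUL-separated argv entries, truncated by the kernel at 128 bytes. A small decoding sketch (the hex value is copied verbatim from one of the runc records above; the container ID at the end is cut off by the 128-byte truncation, so it stays partial here):

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# PROCTITLE value copied from one of the runc audit records above.
# The trailing container ID is truncated at the kernel's 128-byte proctitle cap.
hex_value = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
    "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
    "696F2F3539666665316334313036656534333433633732643963346162383365"
)
print(decode_proctitle(hex_value))
```

Decoded, this is `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…`, matching the containerd task paths seen elsewhere in the log.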
Dec 16 03:17:34.931000 audit: BPF prog-id=111 op=LOAD Dec 16 03:17:34.938489 kernel: audit: type=1334 audit(1765855054.931:387): prog-id=111 op=LOAD Dec 16 03:17:34.938533 kernel: audit: type=1334 audit(1765855054.931:388): prog-id=72 op=UNLOAD Dec 16 03:17:34.938552 kernel: audit: type=1334 audit(1765855054.931:389): prog-id=112 op=LOAD Dec 16 03:17:34.938584 kernel: audit: type=1334 audit(1765855054.931:390): prog-id=113 op=LOAD Dec 16 03:17:34.938606 kernel: audit: type=1334 audit(1765855054.931:391): prog-id=73 op=UNLOAD Dec 16 03:17:34.938627 kernel: audit: type=1334 audit(1765855054.931:392): prog-id=74 op=UNLOAD Dec 16 03:17:34.931000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:17:34.931000 audit: BPF prog-id=112 op=LOAD Dec 16 03:17:34.931000 audit: BPF prog-id=113 op=LOAD Dec 16 03:17:34.931000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:17:34.931000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:17:34.931000 audit: BPF prog-id=114 op=LOAD Dec 16 03:17:34.947890 kernel: audit: type=1334 audit(1765855054.931:393): prog-id=114 op=LOAD Dec 16 03:17:34.947928 kernel: audit: type=1334 audit(1765855054.931:394): prog-id=77 op=UNLOAD Dec 16 03:17:34.931000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:17:34.949599 kernel: audit: type=1334 audit(1765855054.931:395): prog-id=115 op=LOAD Dec 16 03:17:34.931000 audit: BPF prog-id=115 op=LOAD Dec 16 03:17:34.931000 audit: BPF prog-id=116 op=LOAD Dec 16 03:17:34.931000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:17:34.931000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:17:34.935000 audit: BPF prog-id=117 op=LOAD Dec 16 03:17:34.935000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:17:34.938000 audit: BPF prog-id=118 op=LOAD Dec 16 03:17:34.938000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:17:34.938000 audit: BPF prog-id=119 op=LOAD Dec 16 03:17:34.938000 audit: BPF prog-id=120 op=LOAD Dec 16 03:17:34.938000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:17:34.938000 audit: BPF prog-id=63 op=UNLOAD Dec 16 03:17:34.955000 audit: BPF prog-id=121 
op=LOAD Dec 16 03:17:34.955000 audit: BPF prog-id=64 op=UNLOAD Dec 16 03:17:34.956000 audit: BPF prog-id=122 op=LOAD Dec 16 03:17:34.956000 audit: BPF prog-id=123 op=LOAD Dec 16 03:17:34.956000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:17:34.956000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:17:34.957000 audit: BPF prog-id=124 op=LOAD Dec 16 03:17:34.957000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:17:34.957000 audit: BPF prog-id=125 op=LOAD Dec 16 03:17:34.958000 audit: BPF prog-id=126 op=LOAD Dec 16 03:17:34.958000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:17:34.958000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:17:34.960000 audit: BPF prog-id=127 op=LOAD Dec 16 03:17:34.960000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:17:34.960000 audit: BPF prog-id=128 op=LOAD Dec 16 03:17:34.960000 audit: BPF prog-id=129 op=LOAD Dec 16 03:17:34.960000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:17:34.960000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:17:34.961000 audit: BPF prog-id=130 op=LOAD Dec 16 03:17:34.961000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:17:35.153518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:17:35.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:35.170382 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:17:35.219254 kubelet[2859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:17:35.219254 kubelet[2859]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
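The kubelet entries that follow use klog's header format: a severity letter (I/W/E/F), MMDD date, timestamp, PID, source `file:line`, then the structured message. A hedged sketch of splitting one such header apart (the regex and field names are my own, not any kubelet API; journald's own `Dec 16 … kubelet[2859]:` prefix is assumed to be stripped first):

```python
import re

# klog header: <severity>MMDD hh:mm:ss.micros <pid> <file>:<line>] <message>
KLOG_RE = re.compile(
    r"^(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<source>[\w.]+:\d+)\]\s+"
    r"(?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Split a klog-formatted line into its header fields."""
    m = KLOG_RE.match(line)
    if m is None:
        raise ValueError(f"not a klog line: {line!r}")
    return m.groupdict()

# A line lifted from the log below, journald prefix removed.
entry = parse_klog(
    'I1216 03:17:35.225504 2859 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"'
)
print(entry["severity"], entry["source"], entry["msg"])
```

The same pattern applies to the `E1216 …` error lines; only the leading severity letter differs.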
Dec 16 03:17:35.219254 kubelet[2859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:17:35.219622 kubelet[2859]: I1216 03:17:35.219317 2859 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:17:35.226148 kubelet[2859]: I1216 03:17:35.225479 2859 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:17:35.226148 kubelet[2859]: I1216 03:17:35.225504 2859 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:17:35.226148 kubelet[2859]: I1216 03:17:35.225709 2859 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:17:35.226939 kubelet[2859]: I1216 03:17:35.226924 2859 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 03:17:35.229282 kubelet[2859]: I1216 03:17:35.229152 2859 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:17:35.234880 kubelet[2859]: I1216 03:17:35.234534 2859 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:17:35.239985 kubelet[2859]: I1216 03:17:35.239171 2859 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:17:35.239985 kubelet[2859]: I1216 03:17:35.239430 2859 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:17:35.239985 kubelet[2859]: I1216 03:17:35.239450 2859 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:17:35.239985 kubelet[2859]: I1216 03:17:35.239582 2859 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:17:35.240157 
kubelet[2859]: I1216 03:17:35.239592 2859 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:17:35.240716 kubelet[2859]: I1216 03:17:35.240677 2859 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:17:35.240943 kubelet[2859]: I1216 03:17:35.240927 2859 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:17:35.240990 kubelet[2859]: I1216 03:17:35.240944 2859 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:17:35.241025 kubelet[2859]: I1216 03:17:35.240993 2859 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:17:35.241025 kubelet[2859]: I1216 03:17:35.241011 2859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:17:35.243217 kubelet[2859]: I1216 03:17:35.242432 2859 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:17:35.243279 kubelet[2859]: I1216 03:17:35.243245 2859 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:17:35.248158 kubelet[2859]: I1216 03:17:35.248134 2859 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:17:35.248344 kubelet[2859]: I1216 03:17:35.248319 2859 server.go:1289] "Started kubelet" Dec 16 03:17:35.250357 kubelet[2859]: I1216 03:17:35.248987 2859 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:17:35.251235 kubelet[2859]: I1216 03:17:35.249041 2859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:17:35.252012 kubelet[2859]: I1216 03:17:35.251992 2859 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:17:35.253996 kubelet[2859]: I1216 03:17:35.253844 2859 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:17:35.257462 
kubelet[2859]: I1216 03:17:35.250019 2859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:17:35.258091 kubelet[2859]: E1216 03:17:35.258008 2859 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:17:35.259110 kubelet[2859]: I1216 03:17:35.259095 2859 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:17:35.259451 kubelet[2859]: I1216 03:17:35.259431 2859 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:17:35.259615 kubelet[2859]: I1216 03:17:35.259590 2859 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:17:35.259921 kubelet[2859]: I1216 03:17:35.250086 2859 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:17:35.264038 kubelet[2859]: I1216 03:17:35.264007 2859 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:17:35.268639 kubelet[2859]: I1216 03:17:35.268227 2859 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:17:35.268639 kubelet[2859]: I1216 03:17:35.268245 2859 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:17:35.281851 kubelet[2859]: I1216 03:17:35.281815 2859 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:17:35.284078 kubelet[2859]: I1216 03:17:35.284049 2859 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:17:35.284078 kubelet[2859]: I1216 03:17:35.284069 2859 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:17:35.284169 kubelet[2859]: I1216 03:17:35.284090 2859 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:17:35.284169 kubelet[2859]: I1216 03:17:35.284098 2859 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:17:35.284169 kubelet[2859]: E1216 03:17:35.284137 2859 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:17:35.310643 kubelet[2859]: I1216 03:17:35.310514 2859 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:17:35.310643 kubelet[2859]: I1216 03:17:35.310588 2859 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:17:35.310643 kubelet[2859]: I1216 03:17:35.310640 2859 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:17:35.310853 kubelet[2859]: I1216 03:17:35.310805 2859 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:17:35.310853 kubelet[2859]: I1216 03:17:35.310815 2859 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:17:35.310853 kubelet[2859]: I1216 03:17:35.310845 2859 policy_none.go:49] "None policy: Start" Dec 16 03:17:35.310915 kubelet[2859]: I1216 03:17:35.310856 2859 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:17:35.310915 kubelet[2859]: I1216 03:17:35.310868 2859 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:17:35.311045 kubelet[2859]: I1216 03:17:35.311030 2859 state_mem.go:75] "Updated machine memory state" Dec 16 03:17:35.316665 kubelet[2859]: E1216 03:17:35.316645 2859 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:17:35.319200 kubelet[2859]: I1216 
03:17:35.318303 2859 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:17:35.319200 kubelet[2859]: I1216 03:17:35.318319 2859 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:17:35.319200 kubelet[2859]: I1216 03:17:35.318899 2859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:17:35.324699 kubelet[2859]: E1216 03:17:35.324661 2859 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:17:35.385942 kubelet[2859]: I1216 03:17:35.385901 2859 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:35.386105 kubelet[2859]: I1216 03:17:35.386027 2859 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:35.389591 kubelet[2859]: I1216 03:17:35.389561 2859 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:35.430049 kubelet[2859]: I1216 03:17:35.429875 2859 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:17:35.438320 kubelet[2859]: I1216 03:17:35.438283 2859 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 03:17:35.438472 kubelet[2859]: I1216 03:17:35.438372 2859 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 03:17:35.560655 kubelet[2859]: I1216 03:17:35.560607 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:35.560655 kubelet[2859]: I1216 03:17:35.560646 2859 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:35.560655 kubelet[2859]: I1216 03:17:35.560672 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/496aad6b3bec41d6a2adce0d3cd1eabf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"496aad6b3bec41d6a2adce0d3cd1eabf\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:35.563138 kubelet[2859]: I1216 03:17:35.560688 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/496aad6b3bec41d6a2adce0d3cd1eabf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"496aad6b3bec41d6a2adce0d3cd1eabf\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:35.563138 kubelet[2859]: I1216 03:17:35.560704 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:35.563138 kubelet[2859]: I1216 03:17:35.560721 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/496aad6b3bec41d6a2adce0d3cd1eabf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"496aad6b3bec41d6a2adce0d3cd1eabf\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:35.563138 kubelet[2859]: I1216 03:17:35.560737 2859 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:35.563138 kubelet[2859]: I1216 03:17:35.560752 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:35.563294 kubelet[2859]: I1216 03:17:35.560790 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:35.694319 kubelet[2859]: E1216 03:17:35.694028 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:35.694461 kubelet[2859]: E1216 03:17:35.694352 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:35.695934 kubelet[2859]: E1216 03:17:35.694501 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:36.243241 kubelet[2859]: I1216 03:17:36.243187 2859 apiserver.go:52] "Watching apiserver" Dec 16 03:17:36.260389 kubelet[2859]: I1216 
03:17:36.260324 2859 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:17:36.297565 kubelet[2859]: I1216 03:17:36.297532 2859 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:36.297863 kubelet[2859]: I1216 03:17:36.297532 2859 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:36.297863 kubelet[2859]: I1216 03:17:36.297760 2859 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:36.305371 kubelet[2859]: E1216 03:17:36.305064 2859 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:17:36.305371 kubelet[2859]: E1216 03:17:36.305286 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:36.306342 kubelet[2859]: E1216 03:17:36.305826 2859 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 03:17:36.306342 kubelet[2859]: E1216 03:17:36.305929 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:36.306342 kubelet[2859]: E1216 03:17:36.306003 2859 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 03:17:36.306342 kubelet[2859]: E1216 03:17:36.306087 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 
03:17:36.317519 kubelet[2859]: I1216 03:17:36.317366 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.317347886 podStartE2EDuration="1.317347886s" podCreationTimestamp="2025-12-16 03:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:17:36.317277303 +0000 UTC m=+1.138467674" watchObservedRunningTime="2025-12-16 03:17:36.317347886 +0000 UTC m=+1.138538257" Dec 16 03:17:36.325244 kubelet[2859]: I1216 03:17:36.325095 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.32508415 podStartE2EDuration="1.32508415s" podCreationTimestamp="2025-12-16 03:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:17:36.324907929 +0000 UTC m=+1.146098300" watchObservedRunningTime="2025-12-16 03:17:36.32508415 +0000 UTC m=+1.146274521" Dec 16 03:17:36.349983 kubelet[2859]: I1216 03:17:36.349163 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.349125121 podStartE2EDuration="1.349125121s" podCreationTimestamp="2025-12-16 03:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:17:36.332155333 +0000 UTC m=+1.153345704" watchObservedRunningTime="2025-12-16 03:17:36.349125121 +0000 UTC m=+1.170315492" Dec 16 03:17:37.298362 kubelet[2859]: E1216 03:17:37.298307 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:37.298362 kubelet[2859]: E1216 03:17:37.298373 2859 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:37.298842 kubelet[2859]: E1216 03:17:37.298473 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:38.300327 kubelet[2859]: E1216 03:17:38.300284 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:38.300327 kubelet[2859]: E1216 03:17:38.300330 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:38.841639 kubelet[2859]: I1216 03:17:38.841518 2859 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:17:38.843998 containerd[1622]: time="2025-12-16T03:17:38.843057145Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:17:38.845399 kubelet[2859]: I1216 03:17:38.844776 2859 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:17:40.054871 systemd[1]: Created slice kubepods-besteffort-pod45170c0e_245d_44d8_8204_1c0cca23b8fa.slice - libcontainer container kubepods-besteffort-pod45170c0e_245d_44d8_8204_1c0cca23b8fa.slice. 
Dec 16 03:17:40.090600 kubelet[2859]: I1216 03:17:40.090537 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45170c0e-245d-44d8-8204-1c0cca23b8fa-lib-modules\") pod \"kube-proxy-2wzgx\" (UID: \"45170c0e-245d-44d8-8204-1c0cca23b8fa\") " pod="kube-system/kube-proxy-2wzgx" Dec 16 03:17:40.090600 kubelet[2859]: I1216 03:17:40.090605 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvhb\" (UniqueName: \"kubernetes.io/projected/45170c0e-245d-44d8-8204-1c0cca23b8fa-kube-api-access-4gvhb\") pod \"kube-proxy-2wzgx\" (UID: \"45170c0e-245d-44d8-8204-1c0cca23b8fa\") " pod="kube-system/kube-proxy-2wzgx" Dec 16 03:17:40.091840 kubelet[2859]: I1216 03:17:40.090632 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45170c0e-245d-44d8-8204-1c0cca23b8fa-xtables-lock\") pod \"kube-proxy-2wzgx\" (UID: \"45170c0e-245d-44d8-8204-1c0cca23b8fa\") " pod="kube-system/kube-proxy-2wzgx" Dec 16 03:17:40.091840 kubelet[2859]: I1216 03:17:40.090656 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45170c0e-245d-44d8-8204-1c0cca23b8fa-kube-proxy\") pod \"kube-proxy-2wzgx\" (UID: \"45170c0e-245d-44d8-8204-1c0cca23b8fa\") " pod="kube-system/kube-proxy-2wzgx" Dec 16 03:17:40.104177 systemd[1]: Created slice kubepods-besteffort-pod47c010ec_6564_4f32_a1a3_51f3d4df3033.slice - libcontainer container kubepods-besteffort-pod47c010ec_6564_4f32_a1a3_51f3d4df3033.slice. 
Dec 16 03:17:40.191423 kubelet[2859]: I1216 03:17:40.190812 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47c010ec-6564-4f32-a1a3-51f3d4df3033-var-lib-calico\") pod \"tigera-operator-7dcd859c48-x9vvj\" (UID: \"47c010ec-6564-4f32-a1a3-51f3d4df3033\") " pod="tigera-operator/tigera-operator-7dcd859c48-x9vvj" Dec 16 03:17:40.191423 kubelet[2859]: I1216 03:17:40.191073 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kslr4\" (UniqueName: \"kubernetes.io/projected/47c010ec-6564-4f32-a1a3-51f3d4df3033-kube-api-access-kslr4\") pod \"tigera-operator-7dcd859c48-x9vvj\" (UID: \"47c010ec-6564-4f32-a1a3-51f3d4df3033\") " pod="tigera-operator/tigera-operator-7dcd859c48-x9vvj" Dec 16 03:17:40.362677 kubelet[2859]: E1216 03:17:40.362513 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:40.363573 containerd[1622]: time="2025-12-16T03:17:40.363499737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2wzgx,Uid:45170c0e-245d-44d8-8204-1c0cca23b8fa,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:40.408886 containerd[1622]: time="2025-12-16T03:17:40.408829913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-x9vvj,Uid:47c010ec-6564-4f32-a1a3-51f3d4df3033,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:17:40.417892 containerd[1622]: time="2025-12-16T03:17:40.417824692Z" level=info msg="connecting to shim 5c557d3d08f9420fe010be42d1604ff01e58ce8b5cfd49980c8aef2ea9db72ff" address="unix:///run/containerd/s/1994b48f9ce4ff47dc29682411cff26e3b8c0ad6ce81393b9017ffa461e5573b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:40.439807 containerd[1622]: time="2025-12-16T03:17:40.439743120Z" level=info msg="connecting 
to shim 5651eeabd4d5a9d996acaacfe30d324e10b6b18e0c185f643be6f1519e31cfa1" address="unix:///run/containerd/s/b8637d0f5beb92ca37e35114a97782c2a41c3321ba6776a3668657093642fe40" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:40.486480 systemd[1]: Started cri-containerd-5651eeabd4d5a9d996acaacfe30d324e10b6b18e0c185f643be6f1519e31cfa1.scope - libcontainer container 5651eeabd4d5a9d996acaacfe30d324e10b6b18e0c185f643be6f1519e31cfa1. Dec 16 03:17:40.490466 systemd[1]: Started cri-containerd-5c557d3d08f9420fe010be42d1604ff01e58ce8b5cfd49980c8aef2ea9db72ff.scope - libcontainer container 5c557d3d08f9420fe010be42d1604ff01e58ce8b5cfd49980c8aef2ea9db72ff. Dec 16 03:17:40.505000 audit: BPF prog-id=131 op=LOAD Dec 16 03:17:40.508044 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:17:40.508103 kernel: audit: type=1334 audit(1765855060.505:428): prog-id=131 op=LOAD Dec 16 03:17:40.511120 kernel: audit: type=1334 audit(1765855060.507:429): prog-id=132 op=LOAD Dec 16 03:17:40.507000 audit: BPF prog-id=132 op=LOAD Dec 16 03:17:40.512803 kernel: audit: type=1334 audit(1765855060.507:430): prog-id=133 op=LOAD Dec 16 03:17:40.507000 audit: BPF prog-id=133 op=LOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.519048 kernel: audit: type=1300 audit(1765855060.507:430): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.519205 kernel: audit: type=1327 audit(1765855060.507:430): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.524824 kernel: audit: type=1334 audit(1765855060.507:431): prog-id=133 op=UNLOAD Dec 16 03:17:40.507000 audit: BPF prog-id=133 op=UNLOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.532078 kernel: audit: type=1300 audit(1765855060.507:431): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.532161 kernel: audit: type=1327 audit(1765855060.507:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.507000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.537698 kernel: audit: type=1334 audit(1765855060.507:432): prog-id=134 op=LOAD Dec 16 03:17:40.507000 audit: BPF prog-id=134 op=LOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.544535 kernel: audit: type=1300 audit(1765855060.507:432): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.507000 audit: BPF prog-id=135 op=LOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.507000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.507000 audit: BPF prog-id=135 op=UNLOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.507000 audit: BPF prog-id=134 op=UNLOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.507000 audit: BPF prog-id=136 op=LOAD Dec 16 03:17:40.507000 audit[2962]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2950 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:40.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536353165656162643464356139643939366163616163666533306433 Dec 16 03:17:40.509000 audit: BPF prog-id=137 op=LOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.509000 audit: BPF prog-id=137 op=UNLOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.509000 audit: BPF prog-id=138 op=LOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.509000 audit: BPF prog-id=139 op=LOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.509000 audit: BPF prog-id=139 op=UNLOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.509000 audit: BPF prog-id=138 op=UNLOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.509000 audit: BPF prog-id=140 op=LOAD Dec 16 03:17:40.509000 audit[2942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2929 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563353537643364303866393432306665303130626534326431363034 Dec 16 03:17:40.554481 containerd[1622]: time="2025-12-16T03:17:40.554418153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2wzgx,Uid:45170c0e-245d-44d8-8204-1c0cca23b8fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c557d3d08f9420fe010be42d1604ff01e58ce8b5cfd49980c8aef2ea9db72ff\"" Dec 16 03:17:40.555938 kubelet[2859]: E1216 03:17:40.555907 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:40.563190 containerd[1622]: time="2025-12-16T03:17:40.563137454Z" level=info msg="CreateContainer within sandbox \"5c557d3d08f9420fe010be42d1604ff01e58ce8b5cfd49980c8aef2ea9db72ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:17:40.564919 containerd[1622]: time="2025-12-16T03:17:40.564850596Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-x9vvj,Uid:47c010ec-6564-4f32-a1a3-51f3d4df3033,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5651eeabd4d5a9d996acaacfe30d324e10b6b18e0c185f643be6f1519e31cfa1\"" Dec 16 03:17:40.566778 containerd[1622]: time="2025-12-16T03:17:40.566731704Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:17:40.580107 containerd[1622]: time="2025-12-16T03:17:40.580052099Z" level=info msg="Container 5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:40.591326 containerd[1622]: time="2025-12-16T03:17:40.591259732Z" level=info msg="CreateContainer within sandbox \"5c557d3d08f9420fe010be42d1604ff01e58ce8b5cfd49980c8aef2ea9db72ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80\"" Dec 16 03:17:40.592186 containerd[1622]: time="2025-12-16T03:17:40.592151078Z" level=info msg="StartContainer for \"5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80\"" Dec 16 03:17:40.593822 containerd[1622]: time="2025-12-16T03:17:40.593792515Z" level=info msg="connecting to shim 5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80" address="unix:///run/containerd/s/1994b48f9ce4ff47dc29682411cff26e3b8c0ad6ce81393b9017ffa461e5573b" protocol=ttrpc version=3 Dec 16 03:17:40.623167 systemd[1]: Started cri-containerd-5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80.scope - libcontainer container 5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80. 
Dec 16 03:17:40.703000 audit: BPF prog-id=141 op=LOAD Dec 16 03:17:40.703000 audit[3012]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2929 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636165313664373839366466386433343363306364623834343932 Dec 16 03:17:40.703000 audit: BPF prog-id=142 op=LOAD Dec 16 03:17:40.703000 audit[3012]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2929 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636165313664373839366466386433343363306364623834343932 Dec 16 03:17:40.703000 audit: BPF prog-id=142 op=UNLOAD Dec 16 03:17:40.703000 audit[3012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.703000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636165313664373839366466386433343363306364623834343932 Dec 16 03:17:40.703000 audit: BPF prog-id=141 op=UNLOAD Dec 16 03:17:40.703000 audit[3012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636165313664373839366466386433343363306364623834343932 Dec 16 03:17:40.703000 audit: BPF prog-id=143 op=LOAD Dec 16 03:17:40.703000 audit[3012]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2929 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636165313664373839366466386433343363306364623834343932 Dec 16 03:17:40.726346 containerd[1622]: time="2025-12-16T03:17:40.726280062Z" level=info msg="StartContainer for \"5bcae16d7896df8d343c0cdb844922f75892894b27e6977eb6d5376309810d80\" returns successfully" Dec 16 03:17:40.890000 audit[3075]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Dec 16 03:17:40.890000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc70b23030 a2=0 a3=7ffc70b2301c items=0 ppid=3025 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:17:40.891000 audit[3076]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:40.891000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdac152150 a2=0 a3=7ffdac15213c items=0 ppid=3025 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:17:40.893000 audit[3079]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:40.893000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffab414530 a2=0 a3=7fffab41451c items=0 ppid=3025 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:17:40.894000 audit[3078]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3078 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:40.894000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea2f945c0 a2=0 a3=7ffea2f945ac items=0 ppid=3025 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:17:40.896000 audit[3080]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:40.896000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc131aa770 a2=0 a3=7ffc131aa75c items=0 ppid=3025 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:17:40.897000 audit[3082]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:40.897000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff415851d0 a2=0 a3=7fff415851bc items=0 ppid=3025 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:17:40.993000 audit[3084]: NETFILTER_CFG table=filter:60 family=2 
entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:40.993000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc7270ef40 a2=0 a3=7ffc7270ef2c items=0 ppid=3025 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:17:40.997000 audit[3086]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:40.997000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffb4eda080 a2=0 a3=7fffb4eda06c items=0 ppid=3025 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:40.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:17:41.002000 audit[3089]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.002000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdc1cf1fe0 a2=0 a3=7ffdc1cf1fcc items=0 ppid=3025 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.002000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:17:41.004000 audit[3090]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.004000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef796afd0 a2=0 a3=7ffef796afbc items=0 ppid=3025 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:17:41.008000 audit[3092]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.008000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc7c740870 a2=0 a3=7ffc7c74085c items=0 ppid=3025 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:17:41.009000 audit[3093]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.009000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7fff9599f900 a2=0 a3=7fff9599f8ec items=0 ppid=3025 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.009000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:17:41.013000 audit[3095]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.013000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffffbcfd830 a2=0 a3=7ffffbcfd81c items=0 ppid=3025 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:17:41.017000 audit[3098]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.017000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc12340050 a2=0 a3=7ffc1234003c items=0 ppid=3025 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.017000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:17:41.019000 audit[3099]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.019000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb13792d0 a2=0 a3=7ffeb13792bc items=0 ppid=3025 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:17:41.022000 audit[3101]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.022000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe64fe7dd0 a2=0 a3=7ffe64fe7dbc items=0 ppid=3025 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:17:41.024000 audit[3102]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.024000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc42d008f0 a2=0 a3=7ffc42d008dc items=0 ppid=3025 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:17:41.027000 audit[3104]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.027000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8b784de0 a2=0 a3=7fff8b784dcc items=0 ppid=3025 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:17:41.032000 audit[3107]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.032000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc2754e20 a2=0 a3=7ffdc2754e0c items=0 ppid=3025 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.032000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:17:41.037000 audit[3110]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.037000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed41212b0 a2=0 a3=7ffed412129c items=0 ppid=3025 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:17:41.039000 audit[3111]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.039000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea1a69e30 a2=0 a3=7ffea1a69e1c items=0 ppid=3025 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:17:41.042000 audit[3113]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.042000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffd1cff9950 a2=0 a3=7ffd1cff993c items=0 ppid=3025 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:17:41.047000 audit[3116]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.047000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeef8cdcc0 a2=0 a3=7ffeef8cdcac items=0 ppid=3025 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:17:41.048000 audit[3117]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.048000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddf2185f0 a2=0 a3=7ffddf2185dc items=0 ppid=3025 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 
03:17:41.052000 audit[3119]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:17:41.052000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffeccab0d00 a2=0 a3=7ffeccab0cec items=0 ppid=3025 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:17:41.157000 audit[3125]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:41.157000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeaee62de0 a2=0 a3=7ffeaee62dcc items=0 ppid=3025 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:41.169000 audit[3125]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:41.169000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffeaee62de0 a2=0 a3=7ffeaee62dcc items=0 ppid=3025 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:41.171000 audit[3130]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.171000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffd154fdb0 a2=0 a3=7fffd154fd9c items=0 ppid=3025 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:17:41.174000 audit[3132]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.174000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4faf0340 a2=0 a3=7fff4faf032c items=0 ppid=3025 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:17:41.180000 audit[3135]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.180000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffd6fc329a0 a2=0 a3=7ffd6fc3298c items=0 ppid=3025 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:17:41.181000 audit[3136]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.181000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda0c7f810 a2=0 a3=7ffda0c7f7fc items=0 ppid=3025 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:17:41.185000 audit[3138]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.185000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc18c74140 a2=0 a3=7ffc18c7412c items=0 ppid=3025 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.185000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:17:41.186000 audit[3139]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.186000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc851a4a80 a2=0 a3=7ffc851a4a6c items=0 ppid=3025 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:17:41.190000 audit[3141]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.190000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc739de50 a2=0 a3=7ffdc739de3c items=0 ppid=3025 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:17:41.195000 audit[3144]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.195000 audit[3144]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffe5300dee0 a2=0 a3=7ffe5300decc items=0 ppid=3025 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:17:41.197000 audit[3145]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.197000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef0a943f0 a2=0 a3=7ffef0a943dc items=0 ppid=3025 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:17:41.200000 audit[3147]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.200000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc39ea3eb0 a2=0 a3=7ffc39ea3e9c items=0 ppid=3025 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.200000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:17:41.201000 audit[3148]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.201000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed45b87f0 a2=0 a3=7ffed45b87dc items=0 ppid=3025 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:17:41.205000 audit[3150]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.205000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcf1ca2d0 a2=0 a3=7fffcf1ca2bc items=0 ppid=3025 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:17:41.210000 audit[3153]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.210000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffdf9d94f90 a2=0 a3=7ffdf9d94f7c items=0 ppid=3025 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:17:41.215000 audit[3156]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.215000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6da9a570 a2=0 a3=7ffc6da9a55c items=0 ppid=3025 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.215000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:17:41.216000 audit[3157]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.216000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0de24090 a2=0 a3=7ffe0de2407c items=0 ppid=3025 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.216000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:17:41.220000 audit[3159]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.220000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe2a3dd810 a2=0 a3=7ffe2a3dd7fc items=0 ppid=3025 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:17:41.227000 audit[3162]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.227000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3fa8d940 a2=0 a3=7ffc3fa8d92c items=0 ppid=3025 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:17:41.228000 audit[3163]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.228000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0689bce0 a2=0 a3=7fff0689bccc items=0 ppid=3025 
pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:17:41.232000 audit[3165]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.232000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd0247c3c0 a2=0 a3=7ffd0247c3ac items=0 ppid=3025 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:17:41.234000 audit[3166]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.234000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdc9524d0 a2=0 a3=7ffcdc9524bc items=0 ppid=3025 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:17:41.239000 audit[3168]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 03:17:41.239000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb3bdf1d0 a2=0 a3=7fffb3bdf1bc items=0 ppid=3025 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:17:41.245000 audit[3171]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:17:41.245000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff33c7e250 a2=0 a3=7fff33c7e23c items=0 ppid=3025 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:17:41.251000 audit[3173]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:17:41.251000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe0d313d00 a2=0 a3=7ffe0d313cec items=0 ppid=3025 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.251000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:41.251000 audit[3173]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:17:41.251000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe0d313d00 a2=0 a3=7ffe0d313cec items=0 ppid=3025 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:41.251000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:41.310208 kubelet[2859]: E1216 03:17:41.310178 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:41.489935 kubelet[2859]: I1216 03:17:41.489742 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2wzgx" podStartSLOduration=2.48972353 podStartE2EDuration="2.48972353s" podCreationTimestamp="2025-12-16 03:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:17:41.489591501 +0000 UTC m=+6.310781902" watchObservedRunningTime="2025-12-16 03:17:41.48972353 +0000 UTC m=+6.310913901" Dec 16 03:17:42.613510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217282966.mount: Deactivated successfully. 
Dec 16 03:17:43.382012 kubelet[2859]: E1216 03:17:43.381943 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:44.315315 kubelet[2859]: E1216 03:17:44.315206 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:44.887883 containerd[1622]: time="2025-12-16T03:17:44.887771036Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:44.889377 containerd[1622]: time="2025-12-16T03:17:44.889197729Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:17:44.892004 containerd[1622]: time="2025-12-16T03:17:44.891909245Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:44.895086 containerd[1622]: time="2025-12-16T03:17:44.895022037Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:44.895850 containerd[1622]: time="2025-12-16T03:17:44.895770403Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.328995708s" Dec 16 03:17:44.895904 containerd[1622]: time="2025-12-16T03:17:44.895853188Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image 
reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:17:44.902500 containerd[1622]: time="2025-12-16T03:17:44.902425813Z" level=info msg="CreateContainer within sandbox \"5651eeabd4d5a9d996acaacfe30d324e10b6b18e0c185f643be6f1519e31cfa1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:17:44.917743 containerd[1622]: time="2025-12-16T03:17:44.917678815Z" level=info msg="Container c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:44.928682 containerd[1622]: time="2025-12-16T03:17:44.928619950Z" level=info msg="CreateContainer within sandbox \"5651eeabd4d5a9d996acaacfe30d324e10b6b18e0c185f643be6f1519e31cfa1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca\"" Dec 16 03:17:44.930610 containerd[1622]: time="2025-12-16T03:17:44.929331538Z" level=info msg="StartContainer for \"c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca\"" Dec 16 03:17:44.930610 containerd[1622]: time="2025-12-16T03:17:44.930322591Z" level=info msg="connecting to shim c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca" address="unix:///run/containerd/s/b8637d0f5beb92ca37e35114a97782c2a41c3321ba6776a3668657093642fe40" protocol=ttrpc version=3 Dec 16 03:17:44.953345 systemd[1]: Started cri-containerd-c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca.scope - libcontainer container c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca. 
Dec 16 03:17:44.966000 audit: BPF prog-id=144 op=LOAD Dec 16 03:17:44.967000 audit: BPF prog-id=145 op=LOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:44.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.967000 audit: BPF prog-id=145 op=UNLOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:44.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.967000 audit: BPF prog-id=146 op=LOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:44.967000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.967000 audit: BPF prog-id=147 op=LOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:44.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.967000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:44.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.967000 audit: BPF prog-id=146 op=UNLOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:44.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.967000 audit: BPF prog-id=148 op=LOAD Dec 16 03:17:44.967000 audit[3182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2950 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:44.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383336373861323066343663626234326533303631376139333037 Dec 16 03:17:44.990579 containerd[1622]: time="2025-12-16T03:17:44.990458680Z" level=info msg="StartContainer for \"c483678a20f46cbb42e30617a9307b2c52bf9139e39138cbf9dea06fc980b3ca\" returns successfully" Dec 16 03:17:45.332370 kubelet[2859]: I1216 03:17:45.332288 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-x9vvj" podStartSLOduration=1.001870828 podStartE2EDuration="5.332268541s" podCreationTimestamp="2025-12-16 03:17:40 +0000 UTC" firstStartedPulling="2025-12-16 03:17:40.566473359 +0000 UTC m=+5.387663730" lastFinishedPulling="2025-12-16 03:17:44.896871072 +0000 UTC m=+9.718061443" observedRunningTime="2025-12-16 03:17:45.332162572 +0000 UTC m=+10.153352943" watchObservedRunningTime="2025-12-16 03:17:45.332268541 +0000 UTC m=+10.153458912" Dec 16 03:17:46.713798 kubelet[2859]: E1216 03:17:46.713745 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:47.457511 kubelet[2859]: E1216 03:17:47.457455 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:50.819063 sudo[1848]: pam_unix(sudo:session): session closed for user root Dec 16 03:17:50.825344 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:17:50.825414 kernel: audit: type=1106 audit(1765855070.818:508): pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:17:50.818000 audit[1848]: USER_END pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:17:50.825524 sshd[1847]: Connection closed by 10.0.0.1 port 54698 Dec 16 03:17:50.826374 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Dec 16 03:17:50.818000 audit[1848]: CRED_DISP pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:17:50.839431 kernel: audit: type=1104 audit(1765855070.818:509): pid=1848 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:17:50.839489 kernel: audit: type=1106 audit(1765855070.832:510): pid=1843 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:17:50.832000 audit[1843]: USER_END pid=1843 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:17:50.841147 kernel: audit: type=1104 audit(1765855070.832:511): pid=1843 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:17:50.832000 audit[1843]: CRED_DISP pid=1843 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:17:50.839860 systemd-logind[1600]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:17:50.841236 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:54698.service: Deactivated successfully. Dec 16 03:17:50.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:54698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:17:50.849646 kernel: audit: type=1131 audit(1765855070.839:512): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:54698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:17:50.850814 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:17:50.851138 systemd[1]: session-8.scope: Consumed 7.224s CPU time, 217.6M memory peak. Dec 16 03:17:50.853920 systemd-logind[1600]: Removed session 8. Dec 16 03:17:51.181000 audit[3276]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:51.186994 kernel: audit: type=1325 audit(1765855071.181:513): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:51.181000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe7d5ff6a0 a2=0 a3=7ffe7d5ff68c items=0 ppid=3025 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:51.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:51.197034 kernel: audit: type=1300 audit(1765855071.181:513): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe7d5ff6a0 a2=0 a3=7ffe7d5ff68c items=0 ppid=3025 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:51.197121 kernel: audit: type=1327 audit(1765855071.181:513): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:51.197145 kernel: audit: type=1325 audit(1765855071.193:514): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:51.193000 audit[3276]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:51.193000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7d5ff6a0 a2=0 a3=0 items=0 ppid=3025 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:51.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:51.206031 kernel: audit: type=1300 audit(1765855071.193:514): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7d5ff6a0 a2=0 a3=0 items=0 ppid=3025 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:51.217000 audit[3278]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:51.217000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd66402200 a2=0 a3=7ffd664021ec items=0 ppid=3025 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:51.217000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:51.222000 audit[3278]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:51.222000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd66402200 a2=0 a3=0 items=0 ppid=3025 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:51.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:53.324000 audit[3280]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:53.324000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe51a34140 a2=0 a3=7ffe51a3412c items=0 ppid=3025 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:53.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:53.331000 audit[3280]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:53.331000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe51a34140 a2=0 a3=0 items=0 ppid=3025 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:17:53.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:53.383000 audit[3282]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:53.383000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffca4d1dc20 a2=0 a3=7ffca4d1dc0c items=0 ppid=3025 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:53.383000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:53.389000 audit[3282]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:53.389000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca4d1dc20 a2=0 a3=0 items=0 ppid=3025 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:53.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:54.403000 audit[3285]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:54.403000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff659f75b0 a2=0 a3=7fff659f759c items=0 ppid=3025 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:54.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:54.408000 audit[3285]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:54.408000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff659f75b0 a2=0 a3=0 items=0 ppid=3025 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:54.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:55.042257 systemd[1]: Created slice kubepods-besteffort-pod9e9abb7a_f8cd_4fcd_972c_85d9e71e3b71.slice - libcontainer container kubepods-besteffort-pod9e9abb7a_f8cd_4fcd_972c_85d9e71e3b71.slice. 
Dec 16 03:17:55.092075 kubelet[2859]: I1216 03:17:55.092017 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71-tigera-ca-bundle\") pod \"calico-typha-6879855c8b-zk6kh\" (UID: \"9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71\") " pod="calico-system/calico-typha-6879855c8b-zk6kh" Dec 16 03:17:55.092075 kubelet[2859]: I1216 03:17:55.092070 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71-typha-certs\") pod \"calico-typha-6879855c8b-zk6kh\" (UID: \"9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71\") " pod="calico-system/calico-typha-6879855c8b-zk6kh" Dec 16 03:17:55.092531 kubelet[2859]: I1216 03:17:55.092122 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnht6\" (UniqueName: \"kubernetes.io/projected/9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71-kube-api-access-qnht6\") pod \"calico-typha-6879855c8b-zk6kh\" (UID: \"9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71\") " pod="calico-system/calico-typha-6879855c8b-zk6kh" Dec 16 03:17:55.106263 systemd[1]: Created slice kubepods-besteffort-pod5e0a5e4d_93c8_4552_a16c_51818de444d4.slice - libcontainer container kubepods-besteffort-pod5e0a5e4d_93c8_4552_a16c_51818de444d4.slice. 
Dec 16 03:17:55.192612 kubelet[2859]: I1216 03:17:55.192542 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-lib-modules\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192612 kubelet[2859]: I1216 03:17:55.192596 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e0a5e4d-93c8-4552-a16c-51818de444d4-tigera-ca-bundle\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192612 kubelet[2859]: I1216 03:17:55.192617 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-var-lib-calico\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192819 kubelet[2859]: I1216 03:17:55.192636 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-var-run-calico\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192819 kubelet[2859]: I1216 03:17:55.192703 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-cni-bin-dir\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192819 kubelet[2859]: I1216 03:17:55.192727 2859 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwv55\" (UniqueName: \"kubernetes.io/projected/5e0a5e4d-93c8-4552-a16c-51818de444d4-kube-api-access-hwv55\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192819 kubelet[2859]: I1216 03:17:55.192752 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-cni-log-dir\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192819 kubelet[2859]: I1216 03:17:55.192788 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e0a5e4d-93c8-4552-a16c-51818de444d4-node-certs\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192941 kubelet[2859]: I1216 03:17:55.192818 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-flexvol-driver-host\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.192941 kubelet[2859]: I1216 03:17:55.192836 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-xtables-lock\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.193046 kubelet[2859]: I1216 03:17:55.192993 2859 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-policysync\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.193074 kubelet[2859]: I1216 03:17:55.193060 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e0a5e4d-93c8-4552-a16c-51818de444d4-cni-net-dir\") pod \"calico-node-vqhw4\" (UID: \"5e0a5e4d-93c8-4552-a16c-51818de444d4\") " pod="calico-system/calico-node-vqhw4" Dec 16 03:17:55.297280 kubelet[2859]: E1216 03:17:55.296078 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.297280 kubelet[2859]: W1216 03:17:55.296108 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.297280 kubelet[2859]: E1216 03:17:55.296132 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.298996 kubelet[2859]: E1216 03:17:55.298843 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.298996 kubelet[2859]: W1216 03:17:55.298902 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.298996 kubelet[2859]: E1216 03:17:55.298929 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.300413 kubelet[2859]: E1216 03:17:55.300366 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:17:55.312935 kubelet[2859]: E1216 03:17:55.312902 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.312935 kubelet[2859]: W1216 03:17:55.312926 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.313129 kubelet[2859]: E1216 03:17:55.312944 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.350020 kubelet[2859]: E1216 03:17:55.349940 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:55.350588 containerd[1622]: time="2025-12-16T03:17:55.350540951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6879855c8b-zk6kh,Uid:9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:55.382418 containerd[1622]: time="2025-12-16T03:17:55.382352186Z" level=info msg="connecting to shim 81075585457cd3bbc66179ab40386e0d7f31f6d9cdef844d12ab9c3cdaa6f0de" address="unix:///run/containerd/s/196e65cb1896e6bcac5ad27ea24858247b7c35fd63694d6380a74c39995cb82e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:55.383246 kubelet[2859]: E1216 03:17:55.383212 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.383410 kubelet[2859]: W1216 03:17:55.383384 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.383668 kubelet[2859]: E1216 03:17:55.383617 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.384306 kubelet[2859]: E1216 03:17:55.384269 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.384388 kubelet[2859]: W1216 03:17:55.384371 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.384483 kubelet[2859]: E1216 03:17:55.384458 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.385040 kubelet[2859]: E1216 03:17:55.384877 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.385040 kubelet[2859]: W1216 03:17:55.384890 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.385040 kubelet[2859]: E1216 03:17:55.384901 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.386136 kubelet[2859]: E1216 03:17:55.386121 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.386273 kubelet[2859]: W1216 03:17:55.386187 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.386273 kubelet[2859]: E1216 03:17:55.386204 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.386615 kubelet[2859]: E1216 03:17:55.386600 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.386768 kubelet[2859]: W1216 03:17:55.386666 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.386768 kubelet[2859]: E1216 03:17:55.386680 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.386898 kubelet[2859]: E1216 03:17:55.386886 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.387047 kubelet[2859]: W1216 03:17:55.387035 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.387098 kubelet[2859]: E1216 03:17:55.387089 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.388308 kubelet[2859]: E1216 03:17:55.387827 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.388308 kubelet[2859]: W1216 03:17:55.387840 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.388308 kubelet[2859]: E1216 03:17:55.387849 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.388672 kubelet[2859]: E1216 03:17:55.388660 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.388736 kubelet[2859]: W1216 03:17:55.388725 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.388789 kubelet[2859]: E1216 03:17:55.388777 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.389114 kubelet[2859]: E1216 03:17:55.389098 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.389183 kubelet[2859]: W1216 03:17:55.389172 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.389248 kubelet[2859]: E1216 03:17:55.389237 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.389542 kubelet[2859]: E1216 03:17:55.389453 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.389542 kubelet[2859]: W1216 03:17:55.389463 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.389542 kubelet[2859]: E1216 03:17:55.389472 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.389745 kubelet[2859]: E1216 03:17:55.389733 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.389802 kubelet[2859]: W1216 03:17:55.389792 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.389857 kubelet[2859]: E1216 03:17:55.389847 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.390183 kubelet[2859]: E1216 03:17:55.390088 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.390183 kubelet[2859]: W1216 03:17:55.390098 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.390183 kubelet[2859]: E1216 03:17:55.390106 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.390325 kubelet[2859]: E1216 03:17:55.390314 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.390373 kubelet[2859]: W1216 03:17:55.390363 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.390496 kubelet[2859]: E1216 03:17:55.390418 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.390692 kubelet[2859]: E1216 03:17:55.390673 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.390764 kubelet[2859]: W1216 03:17:55.390737 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.390890 kubelet[2859]: E1216 03:17:55.390812 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.391071 kubelet[2859]: E1216 03:17:55.391060 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.391131 kubelet[2859]: W1216 03:17:55.391121 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.391241 kubelet[2859]: E1216 03:17:55.391188 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.391625 kubelet[2859]: E1216 03:17:55.391611 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.391753 kubelet[2859]: W1216 03:17:55.391688 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.391753 kubelet[2859]: E1216 03:17:55.391702 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.392095 kubelet[2859]: E1216 03:17:55.391988 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.392095 kubelet[2859]: W1216 03:17:55.392009 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.392095 kubelet[2859]: E1216 03:17:55.392018 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.392288 kubelet[2859]: E1216 03:17:55.392277 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.392338 kubelet[2859]: W1216 03:17:55.392328 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.392383 kubelet[2859]: E1216 03:17:55.392374 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.392639 kubelet[2859]: E1216 03:17:55.392586 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.392639 kubelet[2859]: W1216 03:17:55.392596 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.392639 kubelet[2859]: E1216 03:17:55.392605 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.392986 kubelet[2859]: E1216 03:17:55.392868 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.392986 kubelet[2859]: W1216 03:17:55.392886 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.392986 kubelet[2859]: E1216 03:17:55.392895 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.394964 kubelet[2859]: E1216 03:17:55.394859 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.395198 kubelet[2859]: W1216 03:17:55.395101 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.395569 kubelet[2859]: E1216 03:17:55.395548 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.396150 kubelet[2859]: I1216 03:17:55.395671 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/296bf1a5-e8cc-4c0a-a28b-66e55ce1e788-varrun\") pod \"csi-node-driver-p7kjv\" (UID: \"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788\") " pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:17:55.397447 kubelet[2859]: E1216 03:17:55.397057 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.397447 kubelet[2859]: W1216 03:17:55.397079 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.397447 kubelet[2859]: E1216 03:17:55.397097 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.398484 kubelet[2859]: E1216 03:17:55.398250 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.398484 kubelet[2859]: W1216 03:17:55.398470 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.398757 kubelet[2859]: E1216 03:17:55.398496 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.399984 kubelet[2859]: E1216 03:17:55.399822 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.399984 kubelet[2859]: W1216 03:17:55.399840 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.399984 kubelet[2859]: E1216 03:17:55.399854 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.400118 kubelet[2859]: I1216 03:17:55.400071 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wc4\" (UniqueName: \"kubernetes.io/projected/296bf1a5-e8cc-4c0a-a28b-66e55ce1e788-kube-api-access-s8wc4\") pod \"csi-node-driver-p7kjv\" (UID: \"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788\") " pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:17:55.400725 kubelet[2859]: E1216 03:17:55.400691 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.400725 kubelet[2859]: W1216 03:17:55.400707 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.400725 kubelet[2859]: E1216 03:17:55.400717 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.401478 kubelet[2859]: E1216 03:17:55.400999 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.401478 kubelet[2859]: W1216 03:17:55.401012 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.401478 kubelet[2859]: E1216 03:17:55.401020 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.401478 kubelet[2859]: E1216 03:17:55.401223 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.401478 kubelet[2859]: W1216 03:17:55.401231 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.401478 kubelet[2859]: E1216 03:17:55.401238 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.401478 kubelet[2859]: I1216 03:17:55.401264 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/296bf1a5-e8cc-4c0a-a28b-66e55ce1e788-kubelet-dir\") pod \"csi-node-driver-p7kjv\" (UID: \"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788\") " pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:17:55.401478 kubelet[2859]: E1216 03:17:55.401466 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.401478 kubelet[2859]: W1216 03:17:55.401475 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.401885 kubelet[2859]: E1216 03:17:55.401483 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.401885 kubelet[2859]: I1216 03:17:55.401507 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/296bf1a5-e8cc-4c0a-a28b-66e55ce1e788-registration-dir\") pod \"csi-node-driver-p7kjv\" (UID: \"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788\") " pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:17:55.401885 kubelet[2859]: E1216 03:17:55.401723 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.401885 kubelet[2859]: W1216 03:17:55.401731 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.401885 kubelet[2859]: E1216 03:17:55.401739 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.401885 kubelet[2859]: I1216 03:17:55.401763 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/296bf1a5-e8cc-4c0a-a28b-66e55ce1e788-socket-dir\") pod \"csi-node-driver-p7kjv\" (UID: \"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788\") " pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:17:55.402151 kubelet[2859]: E1216 03:17:55.401995 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.402151 kubelet[2859]: W1216 03:17:55.402004 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.402151 kubelet[2859]: E1216 03:17:55.402014 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402227 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.402291 kubelet[2859]: W1216 03:17:55.402235 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402243 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402448 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.402291 kubelet[2859]: W1216 03:17:55.402455 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402463 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402648 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.402291 kubelet[2859]: W1216 03:17:55.402655 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402664 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.402291 kubelet[2859]: E1216 03:17:55.402853 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.405461 kubelet[2859]: W1216 03:17:55.402861 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.405461 kubelet[2859]: E1216 03:17:55.402868 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.405461 kubelet[2859]: E1216 03:17:55.403113 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.405461 kubelet[2859]: W1216 03:17:55.403122 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.405461 kubelet[2859]: E1216 03:17:55.403130 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.408393 kubelet[2859]: E1216 03:17:55.408295 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:55.411612 containerd[1622]: time="2025-12-16T03:17:55.411561447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vqhw4,Uid:5e0a5e4d-93c8-4552-a16c-51818de444d4,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:55.415222 systemd[1]: Started cri-containerd-81075585457cd3bbc66179ab40386e0d7f31f6d9cdef844d12ab9c3cdaa6f0de.scope - libcontainer container 81075585457cd3bbc66179ab40386e0d7f31f6d9cdef844d12ab9c3cdaa6f0de. Dec 16 03:17:55.432000 audit[3372]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:55.432000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea5efabd0 a2=0 a3=7ffea5efabbc items=0 ppid=3025 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:55.434000 audit: BPF prog-id=149 op=LOAD Dec 16 03:17:55.434000 audit: BPF prog-id=150 op=LOAD Dec 16 03:17:55.434000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.434000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.435000 audit: BPF prog-id=150 op=UNLOAD Dec 16 03:17:55.435000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.435000 audit: BPF prog-id=151 op=LOAD Dec 16 03:17:55.435000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.435000 audit: BPF prog-id=152 op=LOAD Dec 16 03:17:55.435000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:17:55.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.435000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:17:55.435000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.435000 audit: BPF prog-id=151 op=UNLOAD Dec 16 03:17:55.435000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.435000 audit: BPF prog-id=153 op=LOAD Dec 16 03:17:55.435000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3307 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303735353835343537636433626263363631373961623430333836 Dec 16 03:17:55.437000 audit[3372]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:55.437000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea5efabd0 a2=0 a3=0 items=0 ppid=3025 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:55.445856 containerd[1622]: time="2025-12-16T03:17:55.445715210Z" level=info msg="connecting to shim db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340" address="unix:///run/containerd/s/a1dd17bf2ac0d61c1537d6fd8b1b9923367a4540d20b356341a5f758b26df165" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:55.470198 systemd[1]: Started cri-containerd-db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340.scope - libcontainer container db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340. 
Dec 16 03:17:55.479041 containerd[1622]: time="2025-12-16T03:17:55.478840904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6879855c8b-zk6kh,Uid:9e9abb7a-f8cd-4fcd-972c-85d9e71e3b71,Namespace:calico-system,Attempt:0,} returns sandbox id \"81075585457cd3bbc66179ab40386e0d7f31f6d9cdef844d12ab9c3cdaa6f0de\"" Dec 16 03:17:55.483000 audit: BPF prog-id=154 op=LOAD Dec 16 03:17:55.484407 kubelet[2859]: E1216 03:17:55.484388 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:55.483000 audit: BPF prog-id=155 op=LOAD Dec 16 03:17:55.483000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 03:17:55.483000 audit: BPF prog-id=155 op=UNLOAD Dec 16 03:17:55.483000 audit[3392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 
03:17:55.484000 audit: BPF prog-id=156 op=LOAD Dec 16 03:17:55.484000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 03:17:55.484000 audit: BPF prog-id=157 op=LOAD Dec 16 03:17:55.484000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 03:17:55.484000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:17:55.484000 audit[3392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.484000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 03:17:55.484000 audit: BPF prog-id=156 op=UNLOAD Dec 16 03:17:55.484000 audit[3392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 03:17:55.484000 audit: BPF prog-id=158 op=LOAD Dec 16 03:17:55.484000 audit[3392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3381 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:55.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462376666303762363231353865656138663432313465356263356638 Dec 16 03:17:55.489219 containerd[1622]: time="2025-12-16T03:17:55.489047457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:17:55.501457 containerd[1622]: time="2025-12-16T03:17:55.501414315Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-vqhw4,Uid:5e0a5e4d-93c8-4552-a16c-51818de444d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\"" Dec 16 03:17:55.502309 kubelet[2859]: E1216 03:17:55.502288 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.502309 kubelet[2859]: W1216 03:17:55.502307 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.502383 kubelet[2859]: E1216 03:17:55.502323 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.502405 kubelet[2859]: E1216 03:17:55.502397 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:55.502909 kubelet[2859]: E1216 03:17:55.502893 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.502909 kubelet[2859]: W1216 03:17:55.502903 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.503094 kubelet[2859]: E1216 03:17:55.502912 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.503602 kubelet[2859]: E1216 03:17:55.503589 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.503602 kubelet[2859]: W1216 03:17:55.503599 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.503654 kubelet[2859]: E1216 03:17:55.503610 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.503843 kubelet[2859]: E1216 03:17:55.503831 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.503843 kubelet[2859]: W1216 03:17:55.503840 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.503902 kubelet[2859]: E1216 03:17:55.503849 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.504115 kubelet[2859]: E1216 03:17:55.504102 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.504115 kubelet[2859]: W1216 03:17:55.504113 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.504171 kubelet[2859]: E1216 03:17:55.504122 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.504490 kubelet[2859]: E1216 03:17:55.504470 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.504490 kubelet[2859]: W1216 03:17:55.504481 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.504541 kubelet[2859]: E1216 03:17:55.504490 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.504734 kubelet[2859]: E1216 03:17:55.504717 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.504734 kubelet[2859]: W1216 03:17:55.504727 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.504781 kubelet[2859]: E1216 03:17:55.504736 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.504936 kubelet[2859]: E1216 03:17:55.504924 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.504936 kubelet[2859]: W1216 03:17:55.504933 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.505015 kubelet[2859]: E1216 03:17:55.504941 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.505279 kubelet[2859]: E1216 03:17:55.505260 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.505279 kubelet[2859]: W1216 03:17:55.505271 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.505322 kubelet[2859]: E1216 03:17:55.505279 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.505502 kubelet[2859]: E1216 03:17:55.505486 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.505502 kubelet[2859]: W1216 03:17:55.505495 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.505550 kubelet[2859]: E1216 03:17:55.505524 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.505809 kubelet[2859]: E1216 03:17:55.505791 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.505809 kubelet[2859]: W1216 03:17:55.505801 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.505850 kubelet[2859]: E1216 03:17:55.505810 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506312 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.506980 kubelet[2859]: W1216 03:17:55.506322 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506330 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506514 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.506980 kubelet[2859]: W1216 03:17:55.506521 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506528 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506737 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.506980 kubelet[2859]: W1216 03:17:55.506744 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506753 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.506980 kubelet[2859]: E1216 03:17:55.506983 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.507207 kubelet[2859]: W1216 03:17:55.507008 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.507207 kubelet[2859]: E1216 03:17:55.507019 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.507247 kubelet[2859]: E1216 03:17:55.507228 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.507247 kubelet[2859]: W1216 03:17:55.507235 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.507247 kubelet[2859]: E1216 03:17:55.507243 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.507468 kubelet[2859]: E1216 03:17:55.507451 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.507468 kubelet[2859]: W1216 03:17:55.507461 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.507468 kubelet[2859]: E1216 03:17:55.507469 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.507728 kubelet[2859]: E1216 03:17:55.507707 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.507728 kubelet[2859]: W1216 03:17:55.507719 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.507728 kubelet[2859]: E1216 03:17:55.507727 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.509576 kubelet[2859]: E1216 03:17:55.509559 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.509576 kubelet[2859]: W1216 03:17:55.509570 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.509637 kubelet[2859]: E1216 03:17:55.509580 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.509813 kubelet[2859]: E1216 03:17:55.509790 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.509813 kubelet[2859]: W1216 03:17:55.509799 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.509813 kubelet[2859]: E1216 03:17:55.509807 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.510262 kubelet[2859]: E1216 03:17:55.510221 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.510262 kubelet[2859]: W1216 03:17:55.510252 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.510345 kubelet[2859]: E1216 03:17:55.510276 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.510537 kubelet[2859]: E1216 03:17:55.510521 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.510537 kubelet[2859]: W1216 03:17:55.510534 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.510586 kubelet[2859]: E1216 03:17:55.510544 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.510823 kubelet[2859]: E1216 03:17:55.510801 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.510823 kubelet[2859]: W1216 03:17:55.510811 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.510823 kubelet[2859]: E1216 03:17:55.510819 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.511093 kubelet[2859]: E1216 03:17:55.511072 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.511093 kubelet[2859]: W1216 03:17:55.511081 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.511093 kubelet[2859]: E1216 03:17:55.511089 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:55.511369 kubelet[2859]: E1216 03:17:55.511349 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.511369 kubelet[2859]: W1216 03:17:55.511357 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.511369 kubelet[2859]: E1216 03:17:55.511365 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:55.518484 kubelet[2859]: E1216 03:17:55.518469 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:55.518484 kubelet[2859]: W1216 03:17:55.518480 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:55.518484 kubelet[2859]: E1216 03:17:55.518489 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:56.905990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257969666.mount: Deactivated successfully. 
Dec 16 03:17:57.286367 kubelet[2859]: E1216 03:17:57.286296 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:17:57.396195 containerd[1622]: time="2025-12-16T03:17:57.396147051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:57.397084 containerd[1622]: time="2025-12-16T03:17:57.397007777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:17:57.398316 containerd[1622]: time="2025-12-16T03:17:57.398265197Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:57.401401 containerd[1622]: time="2025-12-16T03:17:57.401365898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:57.402193 containerd[1622]: time="2025-12-16T03:17:57.402168285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.912942073s" Dec 16 03:17:57.402262 containerd[1622]: time="2025-12-16T03:17:57.402197279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:17:57.403115 containerd[1622]: time="2025-12-16T03:17:57.403089233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:17:57.414790 containerd[1622]: time="2025-12-16T03:17:57.414734063Z" level=info msg="CreateContainer within sandbox \"81075585457cd3bbc66179ab40386e0d7f31f6d9cdef844d12ab9c3cdaa6f0de\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:17:57.423462 containerd[1622]: time="2025-12-16T03:17:57.423419358Z" level=info msg="Container 9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:57.430606 containerd[1622]: time="2025-12-16T03:17:57.430567787Z" level=info msg="CreateContainer within sandbox \"81075585457cd3bbc66179ab40386e0d7f31f6d9cdef844d12ab9c3cdaa6f0de\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6\"" Dec 16 03:17:57.431445 containerd[1622]: time="2025-12-16T03:17:57.431407013Z" level=info msg="StartContainer for \"9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6\"" Dec 16 03:17:57.432703 containerd[1622]: time="2025-12-16T03:17:57.432675233Z" level=info msg="connecting to shim 9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6" address="unix:///run/containerd/s/196e65cb1896e6bcac5ad27ea24858247b7c35fd63694d6380a74c39995cb82e" protocol=ttrpc version=3 Dec 16 03:17:57.455133 systemd[1]: Started cri-containerd-9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6.scope - libcontainer container 9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6. 
Dec 16 03:17:57.469000 audit: BPF prog-id=159 op=LOAD Dec 16 03:17:57.471597 kernel: kauditd_printk_skb: 75 callbacks suppressed Dec 16 03:17:57.471675 kernel: audit: type=1334 audit(1765855077.469:541): prog-id=159 op=LOAD Dec 16 03:17:57.469000 audit: BPF prog-id=160 op=LOAD Dec 16 03:17:57.474705 kernel: audit: type=1334 audit(1765855077.469:542): prog-id=160 op=LOAD Dec 16 03:17:57.474752 kernel: audit: type=1300 audit(1765855077.469:542): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.469000 audit[3462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.486433 kernel: audit: type=1327 audit(1765855077.469:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.486511 kernel: audit: type=1334 audit(1765855077.469:543): prog-id=160 op=UNLOAD Dec 16 03:17:57.469000 audit: BPF prog-id=160 op=UNLOAD Dec 16 03:17:57.469000 audit[3462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3462 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.492965 kernel: audit: type=1300 audit(1765855077.469:543): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.493055 kernel: audit: type=1327 audit(1765855077.469:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.470000 audit: BPF prog-id=161 op=LOAD Dec 16 03:17:57.499363 kernel: audit: type=1334 audit(1765855077.470:544): prog-id=161 op=LOAD Dec 16 03:17:57.499446 kernel: audit: type=1300 audit(1765855077.470:544): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.470000 audit[3462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:57.505091 kernel: audit: type=1327 audit(1765855077.470:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.470000 audit: BPF prog-id=162 op=LOAD Dec 16 03:17:57.470000 audit[3462]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.470000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:17:57.470000 audit[3462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 
Dec 16 03:17:57.470000 audit: BPF prog-id=161 op=UNLOAD Dec 16 03:17:57.470000 audit[3462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.470000 audit: BPF prog-id=163 op=LOAD Dec 16 03:17:57.470000 audit[3462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3307 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:57.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346339633230666630353461653732633639376536663536643134 Dec 16 03:17:57.518259 containerd[1622]: time="2025-12-16T03:17:57.518200765Z" level=info msg="StartContainer for \"9a4c9c20ff054ae72c697e6f56d1405b8eaf66cce63399a8644a2535c326bfb6\" returns successfully" Dec 16 03:17:58.347338 kubelet[2859]: E1216 03:17:58.347305 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:17:58.358073 kubelet[2859]: I1216 03:17:58.357994 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6879855c8b-zk6kh" 
podStartSLOduration=2.440562732 podStartE2EDuration="4.357973965s" podCreationTimestamp="2025-12-16 03:17:54 +0000 UTC" firstStartedPulling="2025-12-16 03:17:55.485606977 +0000 UTC m=+20.306797348" lastFinishedPulling="2025-12-16 03:17:57.40301821 +0000 UTC m=+22.224208581" observedRunningTime="2025-12-16 03:17:58.356273033 +0000 UTC m=+23.177463414" watchObservedRunningTime="2025-12-16 03:17:58.357973965 +0000 UTC m=+23.179164346" Dec 16 03:17:58.414269 kubelet[2859]: E1216 03:17:58.414217 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.414269 kubelet[2859]: W1216 03:17:58.414245 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.414269 kubelet[2859]: E1216 03:17:58.414271 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.414491 kubelet[2859]: E1216 03:17:58.414444 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.414491 kubelet[2859]: W1216 03:17:58.414451 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.414491 kubelet[2859]: E1216 03:17:58.414459 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.414632 kubelet[2859]: E1216 03:17:58.414615 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.414632 kubelet[2859]: W1216 03:17:58.414625 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.414632 kubelet[2859]: E1216 03:17:58.414632 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.414857 kubelet[2859]: E1216 03:17:58.414826 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.414857 kubelet[2859]: W1216 03:17:58.414837 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.414857 kubelet[2859]: E1216 03:17:58.414845 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.415163 kubelet[2859]: E1216 03:17:58.415123 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.415163 kubelet[2859]: W1216 03:17:58.415151 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.415251 kubelet[2859]: E1216 03:17:58.415182 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.415520 kubelet[2859]: E1216 03:17:58.415485 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.415520 kubelet[2859]: W1216 03:17:58.415497 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.415520 kubelet[2859]: E1216 03:17:58.415508 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.415739 kubelet[2859]: E1216 03:17:58.415722 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.415739 kubelet[2859]: W1216 03:17:58.415733 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.415843 kubelet[2859]: E1216 03:17:58.415744 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.416020 kubelet[2859]: E1216 03:17:58.416000 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.416020 kubelet[2859]: W1216 03:17:58.416011 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.416020 kubelet[2859]: E1216 03:17:58.416021 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.416327 kubelet[2859]: E1216 03:17:58.416297 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.416327 kubelet[2859]: W1216 03:17:58.416311 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.416327 kubelet[2859]: E1216 03:17:58.416321 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.416584 kubelet[2859]: E1216 03:17:58.416544 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.416584 kubelet[2859]: W1216 03:17:58.416556 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.416584 kubelet[2859]: E1216 03:17:58.416569 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.416853 kubelet[2859]: E1216 03:17:58.416824 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.416853 kubelet[2859]: W1216 03:17:58.416841 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.416853 kubelet[2859]: E1216 03:17:58.416851 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.417085 kubelet[2859]: E1216 03:17:58.417073 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.417085 kubelet[2859]: W1216 03:17:58.417081 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.417136 kubelet[2859]: E1216 03:17:58.417089 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.417324 kubelet[2859]: E1216 03:17:58.417310 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.417349 kubelet[2859]: W1216 03:17:58.417321 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.417349 kubelet[2859]: E1216 03:17:58.417333 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.417563 kubelet[2859]: E1216 03:17:58.417540 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.417563 kubelet[2859]: W1216 03:17:58.417552 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.417563 kubelet[2859]: E1216 03:17:58.417559 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.417815 kubelet[2859]: E1216 03:17:58.417791 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.417815 kubelet[2859]: W1216 03:17:58.417808 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.417922 kubelet[2859]: E1216 03:17:58.417822 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.422164 kubelet[2859]: E1216 03:17:58.422108 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.422164 kubelet[2859]: W1216 03:17:58.422149 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.422164 kubelet[2859]: E1216 03:17:58.422160 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.422358 kubelet[2859]: E1216 03:17:58.422343 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.422401 kubelet[2859]: W1216 03:17:58.422358 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.422401 kubelet[2859]: E1216 03:17:58.422377 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.422600 kubelet[2859]: E1216 03:17:58.422588 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.422600 kubelet[2859]: W1216 03:17:58.422598 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.422664 kubelet[2859]: E1216 03:17:58.422606 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.422840 kubelet[2859]: E1216 03:17:58.422829 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.422840 kubelet[2859]: W1216 03:17:58.422837 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.422889 kubelet[2859]: E1216 03:17:58.422846 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.423049 kubelet[2859]: E1216 03:17:58.423036 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.423049 kubelet[2859]: W1216 03:17:58.423045 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.423108 kubelet[2859]: E1216 03:17:58.423053 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.423243 kubelet[2859]: E1216 03:17:58.423230 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.423243 kubelet[2859]: W1216 03:17:58.423239 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.423356 kubelet[2859]: E1216 03:17:58.423246 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.423477 kubelet[2859]: E1216 03:17:58.423464 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.423477 kubelet[2859]: W1216 03:17:58.423473 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.423524 kubelet[2859]: E1216 03:17:58.423483 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.423857 kubelet[2859]: E1216 03:17:58.423827 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.423857 kubelet[2859]: W1216 03:17:58.423845 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.423927 kubelet[2859]: E1216 03:17:58.423857 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.424108 kubelet[2859]: E1216 03:17:58.424090 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.424108 kubelet[2859]: W1216 03:17:58.424104 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.424168 kubelet[2859]: E1216 03:17:58.424117 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.424337 kubelet[2859]: E1216 03:17:58.424317 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.424337 kubelet[2859]: W1216 03:17:58.424329 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.424418 kubelet[2859]: E1216 03:17:58.424339 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.424538 kubelet[2859]: E1216 03:17:58.424518 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.424538 kubelet[2859]: W1216 03:17:58.424530 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.424538 kubelet[2859]: E1216 03:17:58.424539 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.424766 kubelet[2859]: E1216 03:17:58.424746 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.424766 kubelet[2859]: W1216 03:17:58.424758 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.424846 kubelet[2859]: E1216 03:17:58.424768 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.425054 kubelet[2859]: E1216 03:17:58.425032 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.425054 kubelet[2859]: W1216 03:17:58.425046 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.425054 kubelet[2859]: E1216 03:17:58.425056 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.425352 kubelet[2859]: E1216 03:17:58.425332 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.425352 kubelet[2859]: W1216 03:17:58.425346 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.425425 kubelet[2859]: E1216 03:17:58.425358 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.425547 kubelet[2859]: E1216 03:17:58.425531 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.425547 kubelet[2859]: W1216 03:17:58.425540 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.425547 kubelet[2859]: E1216 03:17:58.425547 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.425791 kubelet[2859]: E1216 03:17:58.425766 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.425791 kubelet[2859]: W1216 03:17:58.425777 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.425791 kubelet[2859]: E1216 03:17:58.425785 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.426379 kubelet[2859]: E1216 03:17:58.426226 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.426379 kubelet[2859]: W1216 03:17:58.426257 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.426379 kubelet[2859]: E1216 03:17:58.426271 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:17:58.426640 kubelet[2859]: E1216 03:17:58.426616 2859 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:17:58.426640 kubelet[2859]: W1216 03:17:58.426630 2859 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:17:58.426640 kubelet[2859]: E1216 03:17:58.426641 2859 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:17:58.652834 containerd[1622]: time="2025-12-16T03:17:58.652704349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:58.654213 containerd[1622]: time="2025-12-16T03:17:58.654097564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:58.655742 containerd[1622]: time="2025-12-16T03:17:58.655704090Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:58.657766 containerd[1622]: time="2025-12-16T03:17:58.657724192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:58.658372 containerd[1622]: time="2025-12-16T03:17:58.658330751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.25521588s" Dec 16 03:17:58.658418 containerd[1622]: time="2025-12-16T03:17:58.658372619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:17:58.662501 containerd[1622]: time="2025-12-16T03:17:58.662455142Z" level=info msg="CreateContainer within sandbox \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:17:58.674703 containerd[1622]: time="2025-12-16T03:17:58.674629235Z" level=info msg="Container bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:58.684506 containerd[1622]: time="2025-12-16T03:17:58.684450891Z" level=info msg="CreateContainer within sandbox \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34\"" Dec 16 03:17:58.685398 containerd[1622]: time="2025-12-16T03:17:58.685327747Z" level=info msg="StartContainer for \"bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34\"" Dec 16 03:17:58.686897 containerd[1622]: time="2025-12-16T03:17:58.686863419Z" level=info msg="connecting to shim bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34" address="unix:///run/containerd/s/a1dd17bf2ac0d61c1537d6fd8b1b9923367a4540d20b356341a5f758b26df165" protocol=ttrpc version=3 Dec 16 03:17:58.721261 systemd[1]: Started cri-containerd-bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34.scope - libcontainer container 
bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34. Dec 16 03:17:58.784000 audit: BPF prog-id=164 op=LOAD Dec 16 03:17:58.784000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:58.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265613563643136653336393233343637396462623835323633316236 Dec 16 03:17:58.784000 audit: BPF prog-id=165 op=LOAD Dec 16 03:17:58.784000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:58.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265613563643136653336393233343637396462623835323633316236 Dec 16 03:17:58.784000 audit: BPF prog-id=165 op=UNLOAD Dec 16 03:17:58.784000 audit[3538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:58.784000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265613563643136653336393233343637396462623835323633316236 Dec 16 03:17:58.784000 audit: BPF prog-id=164 op=UNLOAD Dec 16 03:17:58.784000 audit[3538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:58.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265613563643136653336393233343637396462623835323633316236 Dec 16 03:17:58.784000 audit: BPF prog-id=166 op=LOAD Dec 16 03:17:58.784000 audit[3538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3381 pid=3538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:58.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265613563643136653336393233343637396462623835323633316236 Dec 16 03:17:58.806193 containerd[1622]: time="2025-12-16T03:17:58.806138553Z" level=info msg="StartContainer for \"bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34\" returns successfully" Dec 16 03:17:58.819017 systemd[1]: cri-containerd-bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34.scope: Deactivated successfully. 
Dec 16 03:17:58.823354 containerd[1622]: time="2025-12-16T03:17:58.823293896Z" level=info msg="received container exit event container_id:\"bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34\" id:\"bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34\" pid:3552 exited_at:{seconds:1765855078 nanos:822774831}"
Dec 16 03:17:58.826000 audit: BPF prog-id=166 op=UNLOAD
Dec 16 03:17:58.860212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea5cd16e369234679dbb852631b60cf0ab7a4f30fef64d060892ce364668e34-rootfs.mount: Deactivated successfully.
Dec 16 03:17:59.285110 kubelet[2859]: E1216 03:17:59.285016 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788"
Dec 16 03:17:59.351586 kubelet[2859]: I1216 03:17:59.351442 2859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 03:17:59.486371 kubelet[2859]: E1216 03:17:59.485846 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:17:59.486371 kubelet[2859]: E1216 03:17:59.485885 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:00.355435 kubelet[2859]: E1216 03:18:00.355392 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:00.356582 containerd[1622]: time="2025-12-16T03:18:00.356063651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 16 03:18:01.287699 kubelet[2859]: E1216 03:18:01.287652 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788"
Dec 16 03:18:02.739011 containerd[1622]: time="2025-12-16T03:18:02.738963394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:18:02.740148 containerd[1622]: time="2025-12-16T03:18:02.739970174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291"
Dec 16 03:18:02.741482 containerd[1622]: time="2025-12-16T03:18:02.741437857Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:18:02.743658 containerd[1622]: time="2025-12-16T03:18:02.743612458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:18:02.744065 containerd[1622]: time="2025-12-16T03:18:02.744035092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.387926817s"
Dec 16 03:18:02.744065 containerd[1622]: time="2025-12-16T03:18:02.744062734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 16 03:18:02.747605 containerd[1622]: time="2025-12-16T03:18:02.747560158Z" level=info msg="CreateContainer within sandbox \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 16 03:18:02.757023 containerd[1622]: time="2025-12-16T03:18:02.756974927Z" level=info msg="Container 704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49: CDI devices from CRI Config.CDIDevices: []"
Dec 16 03:18:02.765821 containerd[1622]: time="2025-12-16T03:18:02.765768490Z" level=info msg="CreateContainer within sandbox \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49\""
Dec 16 03:18:02.766469 containerd[1622]: time="2025-12-16T03:18:02.766408681Z" level=info msg="StartContainer for \"704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49\""
Dec 16 03:18:02.768428 containerd[1622]: time="2025-12-16T03:18:02.768399907Z" level=info msg="connecting to shim 704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49" address="unix:///run/containerd/s/a1dd17bf2ac0d61c1537d6fd8b1b9923367a4540d20b356341a5f758b26df165" protocol=ttrpc version=3
Dec 16 03:18:02.794141 systemd[1]: Started cri-containerd-704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49.scope - libcontainer container 704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49.
Dec 16 03:18:02.865000 audit: BPF prog-id=167 op=LOAD
Dec 16 03:18:02.868003 kernel: kauditd_printk_skb: 28 callbacks suppressed
Dec 16 03:18:02.868099 kernel: audit: type=1334 audit(1765855082.865:555): prog-id=167 op=LOAD
Dec 16 03:18:02.865000 audit[3598]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.875005 kernel: audit: type=1300 audit(1765855082.865:555): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.875071 kernel: audit: type=1327 audit(1765855082.865:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.865000 audit: BPF prog-id=168 op=LOAD
Dec 16 03:18:02.881792 kernel: audit: type=1334 audit(1765855082.865:556): prog-id=168 op=LOAD
Dec 16 03:18:02.881850 kernel: audit: type=1300 audit(1765855082.865:556): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.865000 audit[3598]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.895976 kernel: audit: type=1327 audit(1765855082.865:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.865000 audit: BPF prog-id=168 op=UNLOAD
Dec 16 03:18:02.865000 audit[3598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.903559 kernel: audit: type=1334 audit(1765855082.865:557): prog-id=168 op=UNLOAD
Dec 16 03:18:02.903609 kernel: audit: type=1300 audit(1765855082.865:557): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.903649 kernel: audit: type=1327 audit(1765855082.865:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.905249 containerd[1622]: time="2025-12-16T03:18:02.905207357Z" level=info msg="StartContainer for \"704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49\" returns successfully"
Dec 16 03:18:02.865000 audit: BPF prog-id=167 op=UNLOAD
Dec 16 03:18:02.911163 kernel: audit: type=1334 audit(1765855082.865:558): prog-id=167 op=UNLOAD
Dec 16 03:18:02.865000 audit[3598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:02.865000 audit: BPF prog-id=169 op=LOAD
Dec 16 03:18:02.865000 audit[3598]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3381 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:18:02.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730346434363832333837643061373133383231643731366237646261
Dec 16 03:18:03.285235 kubelet[2859]: E1216 03:18:03.285172 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788"
Dec 16 03:18:03.367080 kubelet[2859]: E1216 03:18:03.366597 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:03.923170 systemd[1]: cri-containerd-704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49.scope: Deactivated successfully.
Dec 16 03:18:03.923545 systemd[1]: cri-containerd-704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49.scope: Consumed 626ms CPU time, 172.6M memory peak, 4M read from disk, 171.3M written to disk.
Dec 16 03:18:03.926000 audit: BPF prog-id=169 op=UNLOAD
Dec 16 03:18:03.928315 containerd[1622]: time="2025-12-16T03:18:03.928260250Z" level=info msg="received container exit event container_id:\"704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49\" id:\"704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49\" pid:3610 exited_at:{seconds:1765855083 nanos:924151990}"
Dec 16 03:18:03.958114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-704d4682387d0a713821d716b7dba0bf52eda0d91e7674478a2f1d8643a3da49-rootfs.mount: Deactivated successfully.
Dec 16 03:18:03.977524 kubelet[2859]: I1216 03:18:03.977493 2859 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 03:18:04.562556 kubelet[2859]: E1216 03:18:04.562331 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:04.568905 systemd[1]: Created slice kubepods-besteffort-pod9063305a_9611_4527_b78f_50cf768d4751.slice - libcontainer container kubepods-besteffort-pod9063305a_9611_4527_b78f_50cf768d4751.slice.
Dec 16 03:18:04.593612 systemd[1]: Created slice kubepods-besteffort-pod10d0064d_4592_4240_a31b_0b629652a239.slice - libcontainer container kubepods-besteffort-pod10d0064d_4592_4240_a31b_0b629652a239.slice.
Dec 16 03:18:04.605449 systemd[1]: Created slice kubepods-burstable-pode0e61ffb_fcc3_4b0c_98ff_1e34abeff548.slice - libcontainer container kubepods-burstable-pode0e61ffb_fcc3_4b0c_98ff_1e34abeff548.slice.
Dec 16 03:18:04.612144 systemd[1]: Created slice kubepods-besteffort-pod9ce85cb3_b361_4315_9638_617f5ef6e8f1.slice - libcontainer container kubepods-besteffort-pod9ce85cb3_b361_4315_9638_617f5ef6e8f1.slice.
Dec 16 03:18:04.620640 systemd[1]: Created slice kubepods-besteffort-pod8653bc32_7498_477b_85bb_6a749d44ed95.slice - libcontainer container kubepods-besteffort-pod8653bc32_7498_477b_85bb_6a749d44ed95.slice.
Dec 16 03:18:04.628098 systemd[1]: Created slice kubepods-burstable-podfd1a984e_51b4_4231_b86e_dec6833958fb.slice - libcontainer container kubepods-burstable-podfd1a984e_51b4_4231_b86e_dec6833958fb.slice.
Dec 16 03:18:04.635497 systemd[1]: Created slice kubepods-besteffort-pod04744c4e_2de2_49b5_8dbb_8f920ac6f068.slice - libcontainer container kubepods-besteffort-pod04744c4e_2de2_49b5_8dbb_8f920ac6f068.slice.
Dec 16 03:18:04.641817 systemd[1]: Created slice kubepods-besteffort-podb257371e_206f_4ea1_9c38_a798a97242ea.slice - libcontainer container kubepods-besteffort-podb257371e_206f_4ea1_9c38_a798a97242ea.slice.
Dec 16 03:18:04.720500 kubelet[2859]: I1216 03:18:04.720392 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqqc\" (UniqueName: \"kubernetes.io/projected/fd1a984e-51b4-4231-b86e-dec6833958fb-kube-api-access-2zqqc\") pod \"coredns-674b8bbfcf-7rq44\" (UID: \"fd1a984e-51b4-4231-b86e-dec6833958fb\") " pod="kube-system/coredns-674b8bbfcf-7rq44"
Dec 16 03:18:04.720500 kubelet[2859]: I1216 03:18:04.720443 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a984e-51b4-4231-b86e-dec6833958fb-config-volume\") pod \"coredns-674b8bbfcf-7rq44\" (UID: \"fd1a984e-51b4-4231-b86e-dec6833958fb\") " pod="kube-system/coredns-674b8bbfcf-7rq44"
Dec 16 03:18:04.720500 kubelet[2859]: I1216 03:18:04.720463 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9063305a-9611-4527-b78f-50cf768d4751-calico-apiserver-certs\") pod \"calico-apiserver-84f875f69d-hv8xn\" (UID: \"9063305a-9611-4527-b78f-50cf768d4751\") " pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn"
Dec 16 03:18:04.720500 kubelet[2859]: I1216 03:18:04.720480 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-ca-bundle\") pod \"whisker-7957f58bff-stqnx\" (UID: \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\") " pod="calico-system/whisker-7957f58bff-stqnx"
Dec 16 03:18:04.720500 kubelet[2859]: I1216 03:18:04.720499 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d0064d-4592-4240-a31b-0b629652a239-config\") pod \"goldmane-666569f655-fml4k\" (UID: \"10d0064d-4592-4240-a31b-0b629652a239\") " pod="calico-system/goldmane-666569f655-fml4k"
Dec 16 03:18:04.721142 kubelet[2859]: I1216 03:18:04.720516 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txz4k\" (UniqueName: \"kubernetes.io/projected/b257371e-206f-4ea1-9c38-a798a97242ea-kube-api-access-txz4k\") pod \"calico-apiserver-84f875f69d-gkvpq\" (UID: \"b257371e-206f-4ea1-9c38-a798a97242ea\") " pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq"
Dec 16 03:18:04.721142 kubelet[2859]: I1216 03:18:04.720532 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04744c4e-2de2-49b5-8dbb-8f920ac6f068-tigera-ca-bundle\") pod \"calico-kube-controllers-6d5cb44968-bmmvj\" (UID: \"04744c4e-2de2-49b5-8dbb-8f920ac6f068\") " pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj"
Dec 16 03:18:04.721142 kubelet[2859]: I1216 03:18:04.720551 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dtnt\" (UniqueName: \"kubernetes.io/projected/9063305a-9611-4527-b78f-50cf768d4751-kube-api-access-5dtnt\") pod \"calico-apiserver-84f875f69d-hv8xn\" (UID: \"9063305a-9611-4527-b78f-50cf768d4751\") " pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn"
Dec 16 03:18:04.721142 kubelet[2859]: I1216 03:18:04.720567 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-backend-key-pair\") pod \"whisker-7957f58bff-stqnx\" (UID: \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\") " pod="calico-system/whisker-7957f58bff-stqnx"
Dec 16 03:18:04.721142 kubelet[2859]: I1216 03:18:04.720642 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0e61ffb-fcc3-4b0c-98ff-1e34abeff548-config-volume\") pod \"coredns-674b8bbfcf-m4q4f\" (UID: \"e0e61ffb-fcc3-4b0c-98ff-1e34abeff548\") " pod="kube-system/coredns-674b8bbfcf-m4q4f"
Dec 16 03:18:04.721425 kubelet[2859]: I1216 03:18:04.720711 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4mg2\" (UniqueName: \"kubernetes.io/projected/e0e61ffb-fcc3-4b0c-98ff-1e34abeff548-kube-api-access-m4mg2\") pod \"coredns-674b8bbfcf-m4q4f\" (UID: \"e0e61ffb-fcc3-4b0c-98ff-1e34abeff548\") " pod="kube-system/coredns-674b8bbfcf-m4q4f"
Dec 16 03:18:04.721425 kubelet[2859]: I1216 03:18:04.720834 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10d0064d-4592-4240-a31b-0b629652a239-goldmane-ca-bundle\") pod \"goldmane-666569f655-fml4k\" (UID: \"10d0064d-4592-4240-a31b-0b629652a239\") " pod="calico-system/goldmane-666569f655-fml4k"
Dec 16 03:18:04.721425 kubelet[2859]: I1216 03:18:04.720993 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhvc\" (UniqueName: \"kubernetes.io/projected/8653bc32-7498-477b-85bb-6a749d44ed95-kube-api-access-rhhvc\") pod \"calico-apiserver-689f849d56-bpgq2\" (UID: \"8653bc32-7498-477b-85bb-6a749d44ed95\") " pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2"
Dec 16 03:18:04.721425 kubelet[2859]: I1216 03:18:04.721031 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/10d0064d-4592-4240-a31b-0b629652a239-goldmane-key-pair\") pod \"goldmane-666569f655-fml4k\" (UID: \"10d0064d-4592-4240-a31b-0b629652a239\") " pod="calico-system/goldmane-666569f655-fml4k"
Dec 16 03:18:04.721425 kubelet[2859]: I1216 03:18:04.721123 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9dj\" (UniqueName: \"kubernetes.io/projected/04744c4e-2de2-49b5-8dbb-8f920ac6f068-kube-api-access-lg9dj\") pod \"calico-kube-controllers-6d5cb44968-bmmvj\" (UID: \"04744c4e-2de2-49b5-8dbb-8f920ac6f068\") " pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj"
Dec 16 03:18:04.721580 kubelet[2859]: I1216 03:18:04.721164 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mxh7\" (UniqueName: \"kubernetes.io/projected/9ce85cb3-b361-4315-9638-617f5ef6e8f1-kube-api-access-6mxh7\") pod \"whisker-7957f58bff-stqnx\" (UID: \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\") " pod="calico-system/whisker-7957f58bff-stqnx"
Dec 16 03:18:04.721580 kubelet[2859]: I1216 03:18:04.721199 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ssht\" (UniqueName: \"kubernetes.io/projected/10d0064d-4592-4240-a31b-0b629652a239-kube-api-access-2ssht\") pod \"goldmane-666569f655-fml4k\" (UID: \"10d0064d-4592-4240-a31b-0b629652a239\") " pod="calico-system/goldmane-666569f655-fml4k"
Dec 16 03:18:04.721580 kubelet[2859]: I1216 03:18:04.721237 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b257371e-206f-4ea1-9c38-a798a97242ea-calico-apiserver-certs\") pod \"calico-apiserver-84f875f69d-gkvpq\" (UID: \"b257371e-206f-4ea1-9c38-a798a97242ea\") " pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq"
Dec 16 03:18:04.721580 kubelet[2859]: I1216 03:18:04.721297 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8653bc32-7498-477b-85bb-6a749d44ed95-calico-apiserver-certs\") pod \"calico-apiserver-689f849d56-bpgq2\" (UID: \"8653bc32-7498-477b-85bb-6a749d44ed95\") " pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2"
Dec 16 03:18:04.896025 containerd[1622]: time="2025-12-16T03:18:04.895877932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-hv8xn,Uid:9063305a-9611-4527-b78f-50cf768d4751,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 03:18:04.903583 containerd[1622]: time="2025-12-16T03:18:04.903548717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fml4k,Uid:10d0064d-4592-4240-a31b-0b629652a239,Namespace:calico-system,Attempt:0,}"
Dec 16 03:18:04.909848 kubelet[2859]: E1216 03:18:04.909809 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:04.910161 containerd[1622]: time="2025-12-16T03:18:04.910112623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4q4f,Uid:e0e61ffb-fcc3-4b0c-98ff-1e34abeff548,Namespace:kube-system,Attempt:0,}"
Dec 16 03:18:04.915916 containerd[1622]: time="2025-12-16T03:18:04.915858496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7957f58bff-stqnx,Uid:9ce85cb3-b361-4315-9638-617f5ef6e8f1,Namespace:calico-system,Attempt:0,}"
Dec 16 03:18:04.924556 containerd[1622]: time="2025-12-16T03:18:04.924524008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689f849d56-bpgq2,Uid:8653bc32-7498-477b-85bb-6a749d44ed95,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 03:18:04.931811 kubelet[2859]: E1216 03:18:04.931788 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:04.932096 containerd[1622]: time="2025-12-16T03:18:04.932068996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7rq44,Uid:fd1a984e-51b4-4231-b86e-dec6833958fb,Namespace:kube-system,Attempt:0,}"
Dec 16 03:18:04.940228 containerd[1622]: time="2025-12-16T03:18:04.940195255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d5cb44968-bmmvj,Uid:04744c4e-2de2-49b5-8dbb-8f920ac6f068,Namespace:calico-system,Attempt:0,}"
Dec 16 03:18:04.945679 containerd[1622]: time="2025-12-16T03:18:04.945646805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-gkvpq,Uid:b257371e-206f-4ea1-9c38-a798a97242ea,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 03:18:05.294313 systemd[1]: Created slice kubepods-besteffort-pod296bf1a5_e8cc_4c0a_a28b_66e55ce1e788.slice - libcontainer container kubepods-besteffort-pod296bf1a5_e8cc_4c0a_a28b_66e55ce1e788.slice.
Dec 16 03:18:05.296897 containerd[1622]: time="2025-12-16T03:18:05.296857649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p7kjv,Uid:296bf1a5-e8cc-4c0a-a28b-66e55ce1e788,Namespace:calico-system,Attempt:0,}"
Dec 16 03:18:05.449776 kubelet[2859]: E1216 03:18:05.449714 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:18:05.450796 containerd[1622]: time="2025-12-16T03:18:05.450734636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 16 03:18:05.632506 containerd[1622]: time="2025-12-16T03:18:05.632219599Z" level=error msg="Failed to destroy network for sandbox \"f3260f1eedf602396df0ef4f1b8878fd5d0dd20ca34c44f0d0d1e4a30b2b64bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.640028 containerd[1622]: time="2025-12-16T03:18:05.639740602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fml4k,Uid:10d0064d-4592-4240-a31b-0b629652a239,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3260f1eedf602396df0ef4f1b8878fd5d0dd20ca34c44f0d0d1e4a30b2b64bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.640240 kubelet[2859]: E1216 03:18:05.640179 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3260f1eedf602396df0ef4f1b8878fd5d0dd20ca34c44f0d0d1e4a30b2b64bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.640577 kubelet[2859]: E1216 03:18:05.640282 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3260f1eedf602396df0ef4f1b8878fd5d0dd20ca34c44f0d0d1e4a30b2b64bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fml4k"
Dec 16 03:18:05.640577 kubelet[2859]: E1216 03:18:05.640329 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3260f1eedf602396df0ef4f1b8878fd5d0dd20ca34c44f0d0d1e4a30b2b64bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fml4k"
Dec 16 03:18:05.642421 kubelet[2859]: E1216 03:18:05.640403 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fml4k_calico-system(10d0064d-4592-4240-a31b-0b629652a239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fml4k_calico-system(10d0064d-4592-4240-a31b-0b629652a239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3260f1eedf602396df0ef4f1b8878fd5d0dd20ca34c44f0d0d1e4a30b2b64bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239"
Dec 16 03:18:05.696540 containerd[1622]: time="2025-12-16T03:18:05.696408228Z" level=error msg="Failed to destroy network for sandbox \"98bebfda80c9d4e176e7910abc523bfe5a2543432b99fe25c5b200a62740f23a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.701270 containerd[1622]: time="2025-12-16T03:18:05.701213244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7rq44,Uid:fd1a984e-51b4-4231-b86e-dec6833958fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bebfda80c9d4e176e7910abc523bfe5a2543432b99fe25c5b200a62740f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.701967 kubelet[2859]: E1216 03:18:05.701734 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bebfda80c9d4e176e7910abc523bfe5a2543432b99fe25c5b200a62740f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.701967 kubelet[2859]: E1216 03:18:05.701827 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bebfda80c9d4e176e7910abc523bfe5a2543432b99fe25c5b200a62740f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7rq44"
Dec 16 03:18:05.701967 kubelet[2859]: E1216 03:18:05.701854 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bebfda80c9d4e176e7910abc523bfe5a2543432b99fe25c5b200a62740f23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7rq44"
Dec 16 03:18:05.702079 kubelet[2859]: E1216 03:18:05.701904 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7rq44_kube-system(fd1a984e-51b4-4231-b86e-dec6833958fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7rq44_kube-system(fd1a984e-51b4-4231-b86e-dec6833958fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98bebfda80c9d4e176e7910abc523bfe5a2543432b99fe25c5b200a62740f23a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7rq44" podUID="fd1a984e-51b4-4231-b86e-dec6833958fb"
Dec 16 03:18:05.707523 containerd[1622]: time="2025-12-16T03:18:05.707471888Z" level=error msg="Failed to destroy network for sandbox \"6b48b056e4c8ce693856d1d80d8cd3d959dd29532a09370c0688102cc47696f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.710326 containerd[1622]: time="2025-12-16T03:18:05.710264268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4q4f,Uid:e0e61ffb-fcc3-4b0c-98ff-1e34abeff548,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b48b056e4c8ce693856d1d80d8cd3d959dd29532a09370c0688102cc47696f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.710834 kubelet[2859]: E1216 03:18:05.710772 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b48b056e4c8ce693856d1d80d8cd3d959dd29532a09370c0688102cc47696f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.710913 kubelet[2859]: E1216 03:18:05.710865 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b48b056e4c8ce693856d1d80d8cd3d959dd29532a09370c0688102cc47696f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m4q4f"
Dec 16 03:18:05.710913 kubelet[2859]: E1216 03:18:05.710900 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b48b056e4c8ce693856d1d80d8cd3d959dd29532a09370c0688102cc47696f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m4q4f"
Dec 16 03:18:05.711079 kubelet[2859]: E1216 03:18:05.710997 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m4q4f_kube-system(e0e61ffb-fcc3-4b0c-98ff-1e34abeff548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m4q4f_kube-system(e0e61ffb-fcc3-4b0c-98ff-1e34abeff548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b48b056e4c8ce693856d1d80d8cd3d959dd29532a09370c0688102cc47696f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m4q4f" podUID="e0e61ffb-fcc3-4b0c-98ff-1e34abeff548"
Dec 16 03:18:05.713326 containerd[1622]: time="2025-12-16T03:18:05.713186822Z" level=error msg="Failed to destroy network for sandbox \"e7ab3665e6c7a5a6d8400f2ad50da12a3d083c3e3e14477608d11bf05f737c78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.714865 containerd[1622]: time="2025-12-16T03:18:05.714232264Z" level=error msg="Failed to destroy network for sandbox \"09c61004dd069630442ee5581e06ba5f4fa72afebe3d6d7c07f024ec177986a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.715107 containerd[1622]: time="2025-12-16T03:18:05.714820959Z" level=error msg="Failed to destroy network for sandbox \"68f6510d9c7dbfff3a02d2c540bafff1feac1e5e2c2e895040829b81630e2fc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.717248 containerd[1622]: time="2025-12-16T03:18:05.717006289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d5cb44968-bmmvj,Uid:04744c4e-2de2-49b5-8dbb-8f920ac6f068,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ab3665e6c7a5a6d8400f2ad50da12a3d083c3e3e14477608d11bf05f737c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.719086 kubelet[2859]: E1216 03:18:05.718081 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ab3665e6c7a5a6d8400f2ad50da12a3d083c3e3e14477608d11bf05f737c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.719086 kubelet[2859]: E1216 03:18:05.718158 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ab3665e6c7a5a6d8400f2ad50da12a3d083c3e3e14477608d11bf05f737c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj"
Dec 16 03:18:05.719086 kubelet[2859]: E1216 03:18:05.718181 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ab3665e6c7a5a6d8400f2ad50da12a3d083c3e3e14477608d11bf05f737c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj"
Dec 16 03:18:05.719240 kubelet[2859]: E1216 03:18:05.718244 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d5cb44968-bmmvj_calico-system(04744c4e-2de2-49b5-8dbb-8f920ac6f068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d5cb44968-bmmvj_calico-system(04744c4e-2de2-49b5-8dbb-8f920ac6f068)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7ab3665e6c7a5a6d8400f2ad50da12a3d083c3e3e14477608d11bf05f737c78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068"
Dec 16 03:18:05.720692 containerd[1622]: time="2025-12-16T03:18:05.720612777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689f849d56-bpgq2,Uid:8653bc32-7498-477b-85bb-6a749d44ed95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c61004dd069630442ee5581e06ba5f4fa72afebe3d6d7c07f024ec177986a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:18:05.722019 kubelet[2859]: E1216 03:18:05.720909 2859
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c61004dd069630442ee5581e06ba5f4fa72afebe3d6d7c07f024ec177986a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.722019 kubelet[2859]: E1216 03:18:05.721020 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c61004dd069630442ee5581e06ba5f4fa72afebe3d6d7c07f024ec177986a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" Dec 16 03:18:05.722019 kubelet[2859]: E1216 03:18:05.721056 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09c61004dd069630442ee5581e06ba5f4fa72afebe3d6d7c07f024ec177986a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" Dec 16 03:18:05.722183 kubelet[2859]: E1216 03:18:05.721192 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-689f849d56-bpgq2_calico-apiserver(8653bc32-7498-477b-85bb-6a749d44ed95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-689f849d56-bpgq2_calico-apiserver(8653bc32-7498-477b-85bb-6a749d44ed95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09c61004dd069630442ee5581e06ba5f4fa72afebe3d6d7c07f024ec177986a5\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:18:05.722238 containerd[1622]: time="2025-12-16T03:18:05.721650784Z" level=error msg="Failed to destroy network for sandbox \"cec91de39480cb77437dde8fe81d145796646dfd6fa6d09d3cc37b97293d7326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.725494 containerd[1622]: time="2025-12-16T03:18:05.725386374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7957f58bff-stqnx,Uid:9ce85cb3-b361-4315-9638-617f5ef6e8f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec91de39480cb77437dde8fe81d145796646dfd6fa6d09d3cc37b97293d7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.726154 kubelet[2859]: E1216 03:18:05.726090 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec91de39480cb77437dde8fe81d145796646dfd6fa6d09d3cc37b97293d7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.726333 kubelet[2859]: E1216 03:18:05.726302 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec91de39480cb77437dde8fe81d145796646dfd6fa6d09d3cc37b97293d7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7957f58bff-stqnx" Dec 16 03:18:05.726416 containerd[1622]: time="2025-12-16T03:18:05.726353640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-gkvpq,Uid:b257371e-206f-4ea1-9c38-a798a97242ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f6510d9c7dbfff3a02d2c540bafff1feac1e5e2c2e895040829b81630e2fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.726503 kubelet[2859]: E1216 03:18:05.726416 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec91de39480cb77437dde8fe81d145796646dfd6fa6d09d3cc37b97293d7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7957f58bff-stqnx" Dec 16 03:18:05.726542 kubelet[2859]: E1216 03:18:05.726490 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7957f58bff-stqnx_calico-system(9ce85cb3-b361-4315-9638-617f5ef6e8f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7957f58bff-stqnx_calico-system(9ce85cb3-b361-4315-9638-617f5ef6e8f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cec91de39480cb77437dde8fe81d145796646dfd6fa6d09d3cc37b97293d7326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7957f58bff-stqnx" 
podUID="9ce85cb3-b361-4315-9638-617f5ef6e8f1" Dec 16 03:18:05.726625 kubelet[2859]: E1216 03:18:05.726591 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f6510d9c7dbfff3a02d2c540bafff1feac1e5e2c2e895040829b81630e2fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.726669 kubelet[2859]: E1216 03:18:05.726628 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f6510d9c7dbfff3a02d2c540bafff1feac1e5e2c2e895040829b81630e2fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" Dec 16 03:18:05.726669 kubelet[2859]: E1216 03:18:05.726647 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f6510d9c7dbfff3a02d2c540bafff1feac1e5e2c2e895040829b81630e2fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" Dec 16 03:18:05.726740 kubelet[2859]: E1216 03:18:05.726684 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84f875f69d-gkvpq_calico-apiserver(b257371e-206f-4ea1-9c38-a798a97242ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84f875f69d-gkvpq_calico-apiserver(b257371e-206f-4ea1-9c38-a798a97242ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"68f6510d9c7dbfff3a02d2c540bafff1feac1e5e2c2e895040829b81630e2fc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:05.729724 containerd[1622]: time="2025-12-16T03:18:05.729665524Z" level=error msg="Failed to destroy network for sandbox \"9acd315d98397a79077bba519d2943a365e0f9bd7a0ed81aa08d65e3d4e3803a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.733622 containerd[1622]: time="2025-12-16T03:18:05.733532150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p7kjv,Uid:296bf1a5-e8cc-4c0a-a28b-66e55ce1e788,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acd315d98397a79077bba519d2943a365e0f9bd7a0ed81aa08d65e3d4e3803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.733840 containerd[1622]: time="2025-12-16T03:18:05.733656593Z" level=error msg="Failed to destroy network for sandbox \"097d1e6e82129cb88ce91a08cf774a86aaca596c7b22d6cb5cbd487dda75028d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.733981 kubelet[2859]: E1216 03:18:05.733903 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acd315d98397a79077bba519d2943a365e0f9bd7a0ed81aa08d65e3d4e3803a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.734054 kubelet[2859]: E1216 03:18:05.733994 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acd315d98397a79077bba519d2943a365e0f9bd7a0ed81aa08d65e3d4e3803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:18:05.734054 kubelet[2859]: E1216 03:18:05.734025 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9acd315d98397a79077bba519d2943a365e0f9bd7a0ed81aa08d65e3d4e3803a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p7kjv" Dec 16 03:18:05.734131 kubelet[2859]: E1216 03:18:05.734088 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9acd315d98397a79077bba519d2943a365e0f9bd7a0ed81aa08d65e3d4e3803a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:18:05.736285 containerd[1622]: 
time="2025-12-16T03:18:05.736133211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-hv8xn,Uid:9063305a-9611-4527-b78f-50cf768d4751,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"097d1e6e82129cb88ce91a08cf774a86aaca596c7b22d6cb5cbd487dda75028d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.736518 kubelet[2859]: E1216 03:18:05.736460 2859 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097d1e6e82129cb88ce91a08cf774a86aaca596c7b22d6cb5cbd487dda75028d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:18:05.736659 kubelet[2859]: E1216 03:18:05.736527 2859 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097d1e6e82129cb88ce91a08cf774a86aaca596c7b22d6cb5cbd487dda75028d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" Dec 16 03:18:05.736659 kubelet[2859]: E1216 03:18:05.736547 2859 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097d1e6e82129cb88ce91a08cf774a86aaca596c7b22d6cb5cbd487dda75028d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" Dec 16 03:18:05.736659 kubelet[2859]: E1216 03:18:05.736611 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84f875f69d-hv8xn_calico-apiserver(9063305a-9611-4527-b78f-50cf768d4751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84f875f69d-hv8xn_calico-apiserver(9063305a-9611-4527-b78f-50cf768d4751)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"097d1e6e82129cb88ce91a08cf774a86aaca596c7b22d6cb5cbd487dda75028d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:18:06.398836 systemd[1]: run-netns-cni\x2d4cc57295\x2db066\x2dfcaf\x2d7871\x2dd16b32b22bbd.mount: Deactivated successfully. Dec 16 03:18:06.399013 systemd[1]: run-netns-cni\x2d0fb79e41\x2d5eb2\x2d534a\x2db52c\x2d3bbc35b83d71.mount: Deactivated successfully. Dec 16 03:18:06.399123 systemd[1]: run-netns-cni\x2dd8d05dee\x2d9f39\x2dfa6d\x2d63e7\x2d5eb9f988ca94.mount: Deactivated successfully. Dec 16 03:18:06.399222 systemd[1]: run-netns-cni\x2d442eba82\x2deb05\x2d2d87\x2d2952\x2d97e5331b4547.mount: Deactivated successfully. Dec 16 03:18:06.399315 systemd[1]: run-netns-cni\x2d67f10860\x2dca76\x2d136d\x2d60e5\x2daf95b84fc17c.mount: Deactivated successfully. Dec 16 03:18:06.399398 systemd[1]: run-netns-cni\x2d94f7ee5e\x2dd3f5\x2d0f71\x2d57ad\x2d26d68bb09648.mount: Deactivated successfully. 
Dec 16 03:18:09.262525 kubelet[2859]: I1216 03:18:09.262480 2859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:18:09.263226 kubelet[2859]: E1216 03:18:09.262989 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:09.337000 audit[3959]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:09.343113 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:18:09.343244 kernel: audit: type=1325 audit(1765855089.337:561): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:09.337000 audit[3959]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5c16f910 a2=0 a3=7fff5c16f8fc items=0 ppid=3025 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:09.350117 kernel: audit: type=1300 audit(1765855089.337:561): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5c16f910 a2=0 a3=7fff5c16f8fc items=0 ppid=3025 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:09.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:09.349000 audit[3959]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:09.356361 kernel: audit: type=1327 audit(1765855089.337:561): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:09.356502 kernel: audit: type=1325 audit(1765855089.349:562): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:09.356546 kernel: audit: type=1300 audit(1765855089.349:562): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff5c16f910 a2=0 a3=7fff5c16f8fc items=0 ppid=3025 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:09.349000 audit[3959]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff5c16f910 a2=0 a3=7fff5c16f8fc items=0 ppid=3025 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:09.367187 kernel: audit: type=1327 audit(1765855089.349:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:09.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:09.458765 kubelet[2859]: E1216 03:18:09.458701 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:10.257968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437596668.mount: Deactivated successfully. 
Dec 16 03:18:12.391655 containerd[1622]: time="2025-12-16T03:18:12.391574817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:18:12.398741 containerd[1622]: time="2025-12-16T03:18:12.398648084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:18:12.407983 containerd[1622]: time="2025-12-16T03:18:12.407905055Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:18:12.417524 containerd[1622]: time="2025-12-16T03:18:12.417447403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:18:12.418338 containerd[1622]: time="2025-12-16T03:18:12.418268909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.967493266s" Dec 16 03:18:12.418338 containerd[1622]: time="2025-12-16T03:18:12.418328873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:18:12.455344 containerd[1622]: time="2025-12-16T03:18:12.455279940Z" level=info msg="CreateContainer within sandbox \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:18:12.517434 containerd[1622]: time="2025-12-16T03:18:12.517353600Z" level=info msg="Container 
c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:18:12.519536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395671477.mount: Deactivated successfully. Dec 16 03:18:12.533078 containerd[1622]: time="2025-12-16T03:18:12.533022113Z" level=info msg="CreateContainer within sandbox \"db7ff07b62158eea8f4214e5bc5f8ee893bac82b5832faae0d29d1cac970d340\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c\"" Dec 16 03:18:12.533767 containerd[1622]: time="2025-12-16T03:18:12.533696763Z" level=info msg="StartContainer for \"c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c\"" Dec 16 03:18:12.535553 containerd[1622]: time="2025-12-16T03:18:12.535507883Z" level=info msg="connecting to shim c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c" address="unix:///run/containerd/s/a1dd17bf2ac0d61c1537d6fd8b1b9923367a4540d20b356341a5f758b26df165" protocol=ttrpc version=3 Dec 16 03:18:12.619302 systemd[1]: Started cri-containerd-c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c.scope - libcontainer container c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c. 
Dec 16 03:18:12.713000 audit: BPF prog-id=170 op=LOAD Dec 16 03:18:12.713000 audit[3964]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:12.722401 kernel: audit: type=1334 audit(1765855092.713:563): prog-id=170 op=LOAD Dec 16 03:18:12.722572 kernel: audit: type=1300 audit(1765855092.713:563): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3381 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:12.722646 kernel: audit: type=1327 audit(1765855092.713:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613130653230636339383332336464663965623239363935316437 Dec 16 03:18:12.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613130653230636339383332336464663965623239363935316437 Dec 16 03:18:12.713000 audit: BPF prog-id=171 op=LOAD Dec 16 03:18:12.729631 kernel: audit: type=1334 audit(1765855092.713:564): prog-id=171 op=LOAD Dec 16 03:18:12.713000 audit[3964]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3381 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:12.713000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613130653230636339383332336464663965623239363935316437 Dec 16 03:18:12.713000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:18:12.713000 audit[3964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:12.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613130653230636339383332336464663965623239363935316437 Dec 16 03:18:12.713000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:18:12.713000 audit[3964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3381 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:12.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613130653230636339383332336464663965623239363935316437 Dec 16 03:18:12.713000 audit: BPF prog-id=172 op=LOAD Dec 16 03:18:12.713000 audit[3964]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3381 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:18:12.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334613130653230636339383332336464663965623239363935316437 Dec 16 03:18:12.779657 containerd[1622]: time="2025-12-16T03:18:12.779598692Z" level=info msg="StartContainer for \"c4a10e20cc98323ddf9eb296951d748445815372e570cc98e80c9bea19f1db5c\" returns successfully" Dec 16 03:18:12.861315 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:18:12.861495 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 03:18:13.180840 kubelet[2859]: I1216 03:18:13.180771 2859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-ca-bundle\") pod \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\" (UID: \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\") " Dec 16 03:18:13.180840 kubelet[2859]: I1216 03:18:13.180850 2859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mxh7\" (UniqueName: \"kubernetes.io/projected/9ce85cb3-b361-4315-9638-617f5ef6e8f1-kube-api-access-6mxh7\") pod \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\" (UID: \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\") " Dec 16 03:18:13.181395 kubelet[2859]: I1216 03:18:13.180889 2859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-backend-key-pair\") pod \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\" (UID: \"9ce85cb3-b361-4315-9638-617f5ef6e8f1\") " Dec 16 03:18:13.183139 kubelet[2859]: I1216 03:18:13.183049 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ce85cb3-b361-4315-9638-617f5ef6e8f1" (UID: "9ce85cb3-b361-4315-9638-617f5ef6e8f1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:18:13.191806 kubelet[2859]: I1216 03:18:13.191681 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce85cb3-b361-4315-9638-617f5ef6e8f1-kube-api-access-6mxh7" (OuterVolumeSpecName: "kube-api-access-6mxh7") pod "9ce85cb3-b361-4315-9638-617f5ef6e8f1" (UID: "9ce85cb3-b361-4315-9638-617f5ef6e8f1"). InnerVolumeSpecName "kube-api-access-6mxh7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:18:13.192461 kubelet[2859]: I1216 03:18:13.192422 2859 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ce85cb3-b361-4315-9638-617f5ef6e8f1" (UID: "9ce85cb3-b361-4315-9638-617f5ef6e8f1"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:18:13.281672 kubelet[2859]: I1216 03:18:13.281598 2859 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mxh7\" (UniqueName: \"kubernetes.io/projected/9ce85cb3-b361-4315-9638-617f5ef6e8f1-kube-api-access-6mxh7\") on node \"localhost\" DevicePath \"\"" Dec 16 03:18:13.281672 kubelet[2859]: I1216 03:18:13.281648 2859 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 03:18:13.281672 kubelet[2859]: I1216 03:18:13.281664 2859 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ce85cb3-b361-4315-9638-617f5ef6e8f1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 03:18:13.294710 systemd[1]: Removed slice kubepods-besteffort-pod9ce85cb3_b361_4315_9638_617f5ef6e8f1.slice - libcontainer container kubepods-besteffort-pod9ce85cb3_b361_4315_9638_617f5ef6e8f1.slice. Dec 16 03:18:13.425096 systemd[1]: var-lib-kubelet-pods-9ce85cb3\x2db361\x2d4315\x2d9638\x2d617f5ef6e8f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6mxh7.mount: Deactivated successfully. Dec 16 03:18:13.425240 systemd[1]: var-lib-kubelet-pods-9ce85cb3\x2db361\x2d4315\x2d9638\x2d617f5ef6e8f1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 03:18:13.473432 kubelet[2859]: E1216 03:18:13.473318 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:13.495644 kubelet[2859]: I1216 03:18:13.494748 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vqhw4" podStartSLOduration=1.578844057 podStartE2EDuration="18.494728705s" podCreationTimestamp="2025-12-16 03:17:55 +0000 UTC" firstStartedPulling="2025-12-16 03:17:55.503279366 +0000 UTC m=+20.324469738" lastFinishedPulling="2025-12-16 03:18:12.419164015 +0000 UTC m=+37.240354386" observedRunningTime="2025-12-16 03:18:13.49213495 +0000 UTC m=+38.313325321" watchObservedRunningTime="2025-12-16 03:18:13.494728705 +0000 UTC m=+38.315919086" Dec 16 03:18:13.560940 systemd[1]: Created slice kubepods-besteffort-podff2dd774_9ae6_475f_a701_99b3146ecb38.slice - libcontainer container kubepods-besteffort-podff2dd774_9ae6_475f_a701_99b3146ecb38.slice. 
Dec 16 03:18:13.684807 kubelet[2859]: I1216 03:18:13.684728 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff2dd774-9ae6-475f-a701-99b3146ecb38-whisker-backend-key-pair\") pod \"whisker-76d79fbfdc-v5h7c\" (UID: \"ff2dd774-9ae6-475f-a701-99b3146ecb38\") " pod="calico-system/whisker-76d79fbfdc-v5h7c" Dec 16 03:18:13.684807 kubelet[2859]: I1216 03:18:13.684805 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff2dd774-9ae6-475f-a701-99b3146ecb38-whisker-ca-bundle\") pod \"whisker-76d79fbfdc-v5h7c\" (UID: \"ff2dd774-9ae6-475f-a701-99b3146ecb38\") " pod="calico-system/whisker-76d79fbfdc-v5h7c" Dec 16 03:18:13.684807 kubelet[2859]: I1216 03:18:13.684828 2859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx6vc\" (UniqueName: \"kubernetes.io/projected/ff2dd774-9ae6-475f-a701-99b3146ecb38-kube-api-access-lx6vc\") pod \"whisker-76d79fbfdc-v5h7c\" (UID: \"ff2dd774-9ae6-475f-a701-99b3146ecb38\") " pod="calico-system/whisker-76d79fbfdc-v5h7c" Dec 16 03:18:13.878798 containerd[1622]: time="2025-12-16T03:18:13.878742852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d79fbfdc-v5h7c,Uid:ff2dd774-9ae6-475f-a701-99b3146ecb38,Namespace:calico-system,Attempt:0,}" Dec 16 03:18:14.058236 systemd-networkd[1519]: cali038f11b351e: Link UP Dec 16 03:18:14.058998 systemd-networkd[1519]: cali038f11b351e: Gained carrier Dec 16 03:18:14.077514 containerd[1622]: 2025-12-16 03:18:13.906 [INFO][4032] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:18:14.077514 containerd[1622]: 2025-12-16 03:18:13.928 [INFO][4032] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0 whisker-76d79fbfdc- calico-system ff2dd774-9ae6-475f-a701-99b3146ecb38 937 0 2025-12-16 03:18:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76d79fbfdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76d79fbfdc-v5h7c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali038f11b351e [] [] }} ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-" Dec 16 03:18:14.077514 containerd[1622]: 2025-12-16 03:18:13.929 [INFO][4032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.077514 containerd[1622]: 2025-12-16 03:18:14.008 [INFO][4045] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" HandleID="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Workload="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.009 [INFO][4045] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" HandleID="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Workload="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76d79fbfdc-v5h7c", "timestamp":"2025-12-16 03:18:14.00831232 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.009 [INFO][4045] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.009 [INFO][4045] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.010 [INFO][4045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.020 [INFO][4045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" host="localhost" Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.028 [INFO][4045] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.033 [INFO][4045] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.035 [INFO][4045] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.037 [INFO][4045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:14.077817 containerd[1622]: 2025-12-16 03:18:14.037 [INFO][4045] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" host="localhost" Dec 16 03:18:14.078187 containerd[1622]: 2025-12-16 03:18:14.038 [INFO][4045] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232 Dec 16 03:18:14.078187 containerd[1622]: 2025-12-16 03:18:14.042 [INFO][4045] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" host="localhost" Dec 16 03:18:14.078187 containerd[1622]: 2025-12-16 03:18:14.047 [INFO][4045] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" host="localhost" Dec 16 03:18:14.078187 containerd[1622]: 2025-12-16 03:18:14.047 [INFO][4045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" host="localhost" Dec 16 03:18:14.078187 containerd[1622]: 2025-12-16 03:18:14.047 [INFO][4045] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:18:14.078187 containerd[1622]: 2025-12-16 03:18:14.047 [INFO][4045] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" HandleID="k8s-pod-network.e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Workload="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.078354 containerd[1622]: 2025-12-16 03:18:14.050 [INFO][4032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0", GenerateName:"whisker-76d79fbfdc-", Namespace:"calico-system", SelfLink:"", UID:"ff2dd774-9ae6-475f-a701-99b3146ecb38", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d79fbfdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76d79fbfdc-v5h7c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali038f11b351e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:14.078354 containerd[1622]: 2025-12-16 03:18:14.050 [INFO][4032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.078490 containerd[1622]: 2025-12-16 03:18:14.051 [INFO][4032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali038f11b351e ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.078490 containerd[1622]: 2025-12-16 03:18:14.060 [INFO][4032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.078552 containerd[1622]: 2025-12-16 03:18:14.061 [INFO][4032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0", GenerateName:"whisker-76d79fbfdc-", Namespace:"calico-system", SelfLink:"", UID:"ff2dd774-9ae6-475f-a701-99b3146ecb38", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 18, 13, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d79fbfdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232", Pod:"whisker-76d79fbfdc-v5h7c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali038f11b351e", MAC:"c6:09:5d:56:01:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:14.078650 containerd[1622]: 2025-12-16 03:18:14.074 [INFO][4032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" Namespace="calico-system" Pod="whisker-76d79fbfdc-v5h7c" WorkloadEndpoint="localhost-k8s-whisker--76d79fbfdc--v5h7c-eth0" Dec 16 03:18:14.205903 containerd[1622]: time="2025-12-16T03:18:14.205737641Z" level=info msg="connecting to shim e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232" address="unix:///run/containerd/s/601cf146a557ccb771a4a343c50149a22a2890cc8d70417f841ec08656c42468" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:14.284281 systemd[1]: Started cri-containerd-e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232.scope - libcontainer container e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232. 
Dec 16 03:18:14.311000 audit: BPF prog-id=173 op=LOAD Dec 16 03:18:14.313000 audit: BPF prog-id=174 op=LOAD Dec 16 03:18:14.313000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.313000 audit: BPF prog-id=174 op=UNLOAD Dec 16 03:18:14.313000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.313000 audit: BPF prog-id=175 op=LOAD Dec 16 03:18:14.313000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.313000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.314000 audit: BPF prog-id=176 op=LOAD Dec 16 03:18:14.314000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.314000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:18:14.314000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.314000 audit: BPF prog-id=175 op=UNLOAD Dec 16 03:18:14.314000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:18:14.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.314000 audit: BPF prog-id=177 op=LOAD Dec 16 03:18:14.314000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4069 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532666235643137653562653233316133303335613030363431323231 Dec 16 03:18:14.316644 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:14.745863 containerd[1622]: time="2025-12-16T03:18:14.745768815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d79fbfdc-v5h7c,Uid:ff2dd774-9ae6-475f-a701-99b3146ecb38,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2fb5d17e5be231a3035a00641221c889f6d467eaa3d570049fe7a09dcd07232\"" Dec 16 03:18:14.752719 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 03:18:14.752846 kernel: audit: type=1334 audit(1765855094.747:576): prog-id=178 op=LOAD Dec 16 03:18:14.747000 audit: BPF prog-id=178 op=LOAD Dec 16 03:18:14.747000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc15cd6f00 a2=98 a3=1fffffffffffffff items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.760983 kernel: audit: type=1300 audit(1765855094.747:576): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc15cd6f00 a2=98 a3=1fffffffffffffff items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.747000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.767921 containerd[1622]: time="2025-12-16T03:18:14.764721276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:18:14.768129 kernel: audit: type=1327 audit(1765855094.747:576): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.748000 audit: BPF prog-id=178 op=UNLOAD Dec 16 03:18:14.778783 kernel: audit: type=1334 audit(1765855094.748:577): prog-id=178 op=UNLOAD Dec 16 03:18:14.778949 kernel: audit: type=1300 audit(1765855094.748:577): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc15cd6ed0 a3=0 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.748000 audit[4234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc15cd6ed0 a3=0 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.748000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.749000 audit: BPF prog-id=179 op=LOAD Dec 16 03:18:14.791030 kernel: audit: type=1327 audit(1765855094.748:577): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.791137 kernel: audit: type=1334 audit(1765855094.749:578): prog-id=179 op=LOAD Dec 16 03:18:14.791174 kernel: audit: type=1300 audit(1765855094.749:578): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc15cd6de0 a2=94 a3=3 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.749000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc15cd6de0 a2=94 a3=3 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.803866 kernel: audit: type=1327 audit(1765855094.749:578): 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.804006 kernel: audit: type=1334 audit(1765855094.749:579): prog-id=179 op=UNLOAD Dec 16 03:18:14.749000 audit: BPF prog-id=179 op=UNLOAD Dec 16 03:18:14.749000 audit[4234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc15cd6de0 a2=94 a3=3 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.749000 audit: BPF prog-id=180 op=LOAD Dec 16 03:18:14.749000 audit[4234]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc15cd6e20 a2=94 a3=7ffc15cd7000 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.749000 audit: BPF prog-id=180 op=UNLOAD Dec 16 03:18:14.749000 audit[4234]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc15cd6e20 a2=94 a3=7ffc15cd7000 items=0 ppid=4109 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.749000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:18:14.751000 audit: BPF prog-id=181 op=LOAD Dec 16 03:18:14.751000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd2e651770 a2=98 a3=3 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.751000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:18:14.751000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd2e651740 a3=0 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.754000 audit: BPF prog-id=182 op=LOAD Dec 16 03:18:14.754000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd2e651560 a2=94 a3=54428f items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.754000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.755000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:18:14.755000 audit[4235]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd2e651560 a2=94 a3=54428f items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.755000 audit: BPF prog-id=183 op=LOAD Dec 16 03:18:14.755000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd2e651590 a2=94 a3=2 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.755000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:18:14.755000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd2e651590 a2=0 a3=2 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.988000 audit: BPF prog-id=184 op=LOAD Dec 16 03:18:14.988000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd2e651450 a2=94 a3=1 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.988000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.988000 audit: BPF prog-id=184 op=UNLOAD Dec 16 03:18:14.988000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd2e651450 a2=94 
a3=1 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.988000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.999000 audit: BPF prog-id=185 op=LOAD Dec 16 03:18:14.999000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd2e651440 a2=94 a3=4 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.999000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:18:14.999000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd2e651440 a2=0 a3=4 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.999000 audit: BPF prog-id=186 op=LOAD Dec 16 03:18:14.999000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd2e6512a0 a2=94 a3=5 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.999000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:18:14.999000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd2e6512a0 a2=0 a3=5 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.999000 audit: BPF prog-id=187 op=LOAD Dec 16 03:18:14.999000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd2e6514c0 a2=94 a3=6 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:14.999000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:18:14.999000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd2e6514c0 a2=0 a3=6 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:14.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:15.000000 audit: BPF prog-id=188 op=LOAD Dec 16 03:18:15.000000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd2e650c70 a2=94 a3=88 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:15.000000 audit: BPF prog-id=189 op=LOAD Dec 16 03:18:15.000000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd2e650af0 a2=94 a3=2 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:15.000000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:18:15.000000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd2e650b20 a2=0 a3=7ffd2e650c20 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:15.001000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:18:15.001000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=334d7d10 a2=0 a3=fcb39771999445f8 items=0 ppid=4109 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.001000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:18:15.010000 audit: BPF prog-id=190 op=LOAD Dec 16 03:18:15.010000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9a0a1ec0 a2=98 a3=1999999999999999 items=0 ppid=4109 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:18:15.010000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:18:15.010000 audit[4242]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=3 a1=8 a2=7fff9a0a1e90 a3=0 items=0 ppid=4109 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:18:15.010000 audit: BPF prog-id=191 op=LOAD Dec 16 03:18:15.010000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9a0a1da0 a2=94 a3=ffff items=0 ppid=4109 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:18:15.010000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:18:15.010000 audit[4242]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9a0a1da0 a2=94 a3=ffff items=0 ppid=4109 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:18:15.010000 audit: BPF prog-id=192 op=LOAD Dec 16 
03:18:15.010000 audit[4242]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9a0a1de0 a2=94 a3=7fff9a0a1fc0 items=0 ppid=4109 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:18:15.010000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:18:15.010000 audit[4242]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9a0a1de0 a2=94 a3=7fff9a0a1fc0 items=0 ppid=4109 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:18:15.077443 systemd-networkd[1519]: vxlan.calico: Link UP Dec 16 03:18:15.077456 systemd-networkd[1519]: vxlan.calico: Gained carrier Dec 16 03:18:15.099000 audit: BPF prog-id=193 op=LOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff84281020 a2=98 a3=0 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff84280ff0 a3=0 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=194 op=LOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff84280e30 a2=94 a3=54428f items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff84280e30 a2=94 a3=54428f items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=195 op=LOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff84280e60 a2=94 a3=2 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff84280e60 a2=0 a3=2 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=196 op=LOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff84280c10 a2=94 a3=4 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff84280c10 a2=94 a3=4 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.099000 audit: BPF prog-id=197 op=LOAD Dec 16 03:18:15.099000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff84280d10 a2=94 a3=7fff84280e90 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.100000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:18:15.100000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff84280d10 a2=0 a3=7fff84280e90 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.100000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.101000 audit: BPF prog-id=198 op=LOAD Dec 16 03:18:15.101000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff84280440 a2=94 a3=2 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.101000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:18:15.101000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff84280440 a2=0 a3=2 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.101000 audit: BPF prog-id=199 op=LOAD Dec 16 03:18:15.101000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff84280540 a2=94 a3=30 items=0 ppid=4109 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.101000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:18:15.110000 audit: BPF prog-id=200 op=LOAD Dec 16 03:18:15.110000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc401e5b0 a2=98 a3=0 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.110000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:18:15.110000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffc401e580 a3=0 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.110000 audit: BPF prog-id=201 op=LOAD Dec 16 03:18:15.110000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc401e3a0 a2=94 a3=54428f items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.110000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.111000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:18:15.111000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffc401e3a0 a2=94 a3=54428f items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.111000 audit: BPF prog-id=202 op=LOAD Dec 16 03:18:15.111000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc401e3d0 a2=94 a3=2 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.111000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:18:15.111000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffc401e3d0 a2=0 a3=2 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.111000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.199128 containerd[1622]: time="2025-12-16T03:18:15.199062963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:15.203438 containerd[1622]: time="2025-12-16T03:18:15.203393546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:18:15.203523 containerd[1622]: time="2025-12-16T03:18:15.203411402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:15.203686 kubelet[2859]: E1216 03:18:15.203634 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:15.204070 kubelet[2859]: E1216 03:18:15.203693 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:15.211177 kubelet[2859]: E1216 03:18:15.211114 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f11ab101807549ab927f64e4835ff8ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:15.213621 containerd[1622]: time="2025-12-16T03:18:15.213576729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:18:15.287495 kubelet[2859]: I1216 03:18:15.287439 2859 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce85cb3-b361-4315-9638-617f5ef6e8f1" path="/var/lib/kubelet/pods/9ce85cb3-b361-4315-9638-617f5ef6e8f1/volumes" Dec 16 03:18:15.315000 audit: BPF prog-id=203 op=LOAD Dec 16 03:18:15.315000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffc401e290 a2=94 a3=1 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.315000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.315000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:18:15.315000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffc401e290 a2=94 a3=1 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.315000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.325000 audit: BPF prog-id=204 op=LOAD Dec 16 03:18:15.325000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffc401e280 a2=94 a3=4 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.325000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.325000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:18:15.325000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffc401e280 a2=0 a3=4 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.325000 audit: BPF prog-id=205 op=LOAD Dec 16 03:18:15.325000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc401e0e0 a2=94 a3=5 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.326000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:18:15.326000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc401e0e0 a2=0 a3=5 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.326000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.326000 audit: BPF prog-id=206 op=LOAD Dec 16 03:18:15.326000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffc401e300 a2=94 a3=6 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.326000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:18:15.326000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffc401e300 a2=0 a3=6 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.326000 audit: BPF prog-id=207 op=LOAD Dec 16 03:18:15.326000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffc401dab0 a2=94 a3=88 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.326000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.326000 audit: BPF prog-id=208 op=LOAD Dec 16 03:18:15.326000 audit[4272]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffc401d930 a2=94 a3=2 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.327000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:18:15.327000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffc401d960 a2=0 a3=7fffc401da60 items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.327000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:18:15.327000 audit[4272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3d742d10 a2=0 a3=eecc8a41a4c0c39a items=0 ppid=4109 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.327000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:18:15.339000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:18:15.339000 audit[4109]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000c310c0 a2=0 a3=0 items=0 ppid=4085 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.339000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:18:15.399000 audit[4299]: NETFILTER_CFG table=mangle:119 family=2 entries=16 op=nft_register_chain pid=4299 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:15.399000 audit[4299]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe044b6130 a2=0 a3=7ffe044b611c items=0 ppid=4109 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.399000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:15.404000 audit[4302]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:15.404000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffda4a54700 a2=0 a3=7ffda4a546ec items=0 ppid=4109 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.404000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:15.407000 audit[4298]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4298 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:15.407000 audit[4298]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff565fc960 a2=0 a3=7fff565fc94c items=0 ppid=4109 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.407000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:15.409000 audit[4301]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4301 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:15.409000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe750d0320 a2=0 a3=7ffe750d030c items=0 ppid=4109 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:15.409000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:15.486885 systemd-networkd[1519]: cali038f11b351e: Gained IPv6LL Dec 16 03:18:15.534996 containerd[1622]: time="2025-12-16T03:18:15.534506104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:15.538965 containerd[1622]: time="2025-12-16T03:18:15.538775060Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:18:15.538965 containerd[1622]: time="2025-12-16T03:18:15.538839915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:15.539141 kubelet[2859]: E1216 03:18:15.539087 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:15.539197 kubelet[2859]: E1216 03:18:15.539153 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:15.539379 kubelet[2859]: E1216 03:18:15.539289 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:15.541369 kubelet[2859]: E1216 03:18:15.541158 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:18:16.189292 systemd-networkd[1519]: vxlan.calico: Gained IPv6LL Dec 16 03:18:16.286189 containerd[1622]: time="2025-12-16T03:18:16.285862682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-gkvpq,Uid:b257371e-206f-4ea1-9c38-a798a97242ea,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:18:16.291701 containerd[1622]: time="2025-12-16T03:18:16.287053564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d5cb44968-bmmvj,Uid:04744c4e-2de2-49b5-8dbb-8f920ac6f068,Namespace:calico-system,Attempt:0,}" Dec 16 03:18:16.492208 kubelet[2859]: E1216 03:18:16.490719 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" 
for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:18:16.575691 systemd-networkd[1519]: cali2c0421826f6: Link UP Dec 16 03:18:16.576223 systemd-networkd[1519]: cali2c0421826f6: Gained carrier Dec 16 03:18:16.588000 audit[4361]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:16.588000 audit[4361]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3950e9c0 a2=0 a3=7ffc3950e9ac items=0 ppid=3025 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:16.597000 audit[4361]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:16.597000 audit[4361]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc3950e9c0 a2=0 a3=0 items=0 ppid=3025 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:16.601757 
containerd[1622]: 2025-12-16 03:18:16.377 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0 calico-apiserver-84f875f69d- calico-apiserver b257371e-206f-4ea1-9c38-a798a97242ea 855 0 2025-12-16 03:17:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84f875f69d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84f875f69d-gkvpq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c0421826f6 [] [] }} ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-" Dec 16 03:18:16.601757 containerd[1622]: 2025-12-16 03:18:16.377 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.601757 containerd[1622]: 2025-12-16 03:18:16.424 [INFO][4340] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" HandleID="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Workload="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.424 [INFO][4340] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" 
HandleID="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Workload="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034b5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84f875f69d-gkvpq", "timestamp":"2025-12-16 03:18:16.424601273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.424 [INFO][4340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.424 [INFO][4340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.425 [INFO][4340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.442 [INFO][4340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" host="localhost" Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.480 [INFO][4340] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.497 [INFO][4340] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.512 [INFO][4340] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:16.602000 containerd[1622]: 2025-12-16 03:18:16.516 [INFO][4340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:16.602000 containerd[1622]: 
2025-12-16 03:18:16.516 [INFO][4340] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" host="localhost" Dec 16 03:18:16.602539 containerd[1622]: 2025-12-16 03:18:16.523 [INFO][4340] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619 Dec 16 03:18:16.602539 containerd[1622]: 2025-12-16 03:18:16.548 [INFO][4340] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" host="localhost" Dec 16 03:18:16.602539 containerd[1622]: 2025-12-16 03:18:16.561 [INFO][4340] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" host="localhost" Dec 16 03:18:16.602539 containerd[1622]: 2025-12-16 03:18:16.561 [INFO][4340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" host="localhost" Dec 16 03:18:16.602539 containerd[1622]: 2025-12-16 03:18:16.561 [INFO][4340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:18:16.602539 containerd[1622]: 2025-12-16 03:18:16.561 [INFO][4340] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" HandleID="k8s-pod-network.06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Workload="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.602739 containerd[1622]: 2025-12-16 03:18:16.568 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0", GenerateName:"calico-apiserver-84f875f69d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b257371e-206f-4ea1-9c38-a798a97242ea", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f875f69d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84f875f69d-gkvpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c0421826f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:16.602814 containerd[1622]: 2025-12-16 03:18:16.568 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.602814 containerd[1622]: 2025-12-16 03:18:16.568 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c0421826f6 ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.602814 containerd[1622]: 2025-12-16 03:18:16.576 [INFO][4313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.603947 containerd[1622]: 2025-12-16 03:18:16.576 [INFO][4313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0", 
GenerateName:"calico-apiserver-84f875f69d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b257371e-206f-4ea1-9c38-a798a97242ea", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f875f69d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619", Pod:"calico-apiserver-84f875f69d-gkvpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c0421826f6", MAC:"86:17:98:f7:27:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:16.604044 containerd[1622]: 2025-12-16 03:18:16.598 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-gkvpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--gkvpq-eth0" Dec 16 03:18:16.621000 audit[4369]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4369 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:16.621000 audit[4369]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=28208 a0=3 a1=7ffe827ab170 a2=0 a3=7ffe827ab15c items=0 ppid=4109 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.621000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:16.647169 containerd[1622]: time="2025-12-16T03:18:16.647062552Z" level=info msg="connecting to shim 06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619" address="unix:///run/containerd/s/39aa9971c634e3ab74ced322696870c64049ee19e36f668242ad389c7c975dfe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:16.665696 systemd-networkd[1519]: cali8b75ab9bbcd: Link UP Dec 16 03:18:16.667040 systemd-networkd[1519]: cali8b75ab9bbcd: Gained carrier Dec 16 03:18:16.707716 systemd[1]: Started cri-containerd-06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619.scope - libcontainer container 06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619. 
Dec 16 03:18:16.723142 containerd[1622]: 2025-12-16 03:18:16.390 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0 calico-kube-controllers-6d5cb44968- calico-system 04744c4e-2de2-49b5-8dbb-8f920ac6f068 858 0 2025-12-16 03:17:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d5cb44968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d5cb44968-bmmvj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8b75ab9bbcd [] [] }} ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-" Dec 16 03:18:16.723142 containerd[1622]: 2025-12-16 03:18:16.390 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.723142 containerd[1622]: 2025-12-16 03:18:16.434 [INFO][4348] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" HandleID="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Workload="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.434 [INFO][4348] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" HandleID="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Workload="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003555c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d5cb44968-bmmvj", "timestamp":"2025-12-16 03:18:16.434231821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.434 [INFO][4348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.561 [INFO][4348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.561 [INFO][4348] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.581 [INFO][4348] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" host="localhost" Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.598 [INFO][4348] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.611 [INFO][4348] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.615 [INFO][4348] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.621 [INFO][4348] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:16.723424 containerd[1622]: 2025-12-16 03:18:16.621 [INFO][4348] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" host="localhost" Dec 16 03:18:16.723768 containerd[1622]: 2025-12-16 03:18:16.626 [INFO][4348] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9 Dec 16 03:18:16.723768 containerd[1622]: 2025-12-16 03:18:16.641 [INFO][4348] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" host="localhost" Dec 16 03:18:16.723768 containerd[1622]: 2025-12-16 03:18:16.657 [INFO][4348] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" host="localhost" Dec 16 03:18:16.723768 containerd[1622]: 2025-12-16 03:18:16.657 [INFO][4348] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" host="localhost" Dec 16 03:18:16.723768 containerd[1622]: 2025-12-16 03:18:16.658 [INFO][4348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:18:16.723768 containerd[1622]: 2025-12-16 03:18:16.658 [INFO][4348] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" HandleID="k8s-pod-network.2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Workload="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.723887 containerd[1622]: 2025-12-16 03:18:16.662 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0", GenerateName:"calico-kube-controllers-6d5cb44968-", Namespace:"calico-system", SelfLink:"", UID:"04744c4e-2de2-49b5-8dbb-8f920ac6f068", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d5cb44968", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d5cb44968-bmmvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b75ab9bbcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:16.723945 containerd[1622]: 2025-12-16 03:18:16.662 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.723945 containerd[1622]: 2025-12-16 03:18:16.662 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b75ab9bbcd ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.723945 containerd[1622]: 2025-12-16 03:18:16.666 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.724057 containerd[1622]: 
2025-12-16 03:18:16.670 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0", GenerateName:"calico-kube-controllers-6d5cb44968-", Namespace:"calico-system", SelfLink:"", UID:"04744c4e-2de2-49b5-8dbb-8f920ac6f068", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d5cb44968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9", Pod:"calico-kube-controllers-6d5cb44968-bmmvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b75ab9bbcd", MAC:"8a:a4:8b:84:b3:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:16.724113 containerd[1622]: 
2025-12-16 03:18:16.717 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" Namespace="calico-system" Pod="calico-kube-controllers-6d5cb44968-bmmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d5cb44968--bmmvj-eth0" Dec 16 03:18:16.737000 audit: BPF prog-id=209 op=LOAD Dec 16 03:18:16.738000 audit: BPF prog-id=210 op=LOAD Dec 16 03:18:16.738000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.738000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:18:16.738000 audit[4390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.738000 audit: BPF prog-id=211 op=LOAD Dec 16 03:18:16.738000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.739000 audit: BPF prog-id=212 op=LOAD Dec 16 03:18:16.739000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.739000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:18:16.739000 audit[4390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.739000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:18:16.739000 audit[4390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.739000 audit: BPF prog-id=213 op=LOAD Dec 16 03:18:16.739000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4378 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036653761636535613662663332383633643535313636613639393363 Dec 16 03:18:16.741504 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:16.745000 audit[4416]: NETFILTER_CFG table=filter:126 family=2 entries=40 op=nft_register_chain pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:16.745000 audit[4416]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffedabcbab0 a2=0 a3=7ffedabcba9c items=0 ppid=4109 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:16.745000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:16.844103 containerd[1622]: time="2025-12-16T03:18:16.844016615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-gkvpq,Uid:b257371e-206f-4ea1-9c38-a798a97242ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"06e7ace5a6bf32863d55166a6993c242b5fd80b78d1c11bbc896e88c67dc3619\"" Dec 16 03:18:16.847503 containerd[1622]: time="2025-12-16T03:18:16.847458425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:17.250112 containerd[1622]: time="2025-12-16T03:18:17.250020194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:17.287741 containerd[1622]: time="2025-12-16T03:18:17.287651075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-hv8xn,Uid:9063305a-9611-4527-b78f-50cf768d4751,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:18:17.288269 containerd[1622]: time="2025-12-16T03:18:17.287864517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689f849d56-bpgq2,Uid:8653bc32-7498-477b-85bb-6a749d44ed95,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:18:17.503223 containerd[1622]: time="2025-12-16T03:18:17.503044049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:17.503223 containerd[1622]: time="2025-12-16T03:18:17.503173670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:17.503428 kubelet[2859]: E1216 03:18:17.503364 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:17.503428 kubelet[2859]: E1216 03:18:17.503420 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:17.503870 kubelet[2859]: E1216 03:18:17.503570 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txz4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-gkvpq_calico-apiserver(b257371e-206f-4ea1-9c38-a798a97242ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:17.504854 kubelet[2859]: E1216 03:18:17.504787 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:17.507505 kubelet[2859]: E1216 03:18:17.507442 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:17.943000 audit[4424]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:17.943000 audit[4424]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4d8dc6a0 a2=0 a3=7ffc4d8dc68c items=0 ppid=3025 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:17.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:17.953000 audit[4424]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:17.953000 audit[4424]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc4d8dc6a0 a2=0 a3=0 items=0 ppid=3025 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:17.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:18.024355 containerd[1622]: time="2025-12-16T03:18:18.024276162Z" level=info msg="connecting to shim 2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9" address="unix:///run/containerd/s/ade9d9cc4bcbd3ca20e87dc61075febe6608ac191dd4cfda59e8f364a6c2261f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:18.064781 systemd[1]: 
Started cri-containerd-2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9.scope - libcontainer container 2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9. Dec 16 03:18:18.084000 audit: BPF prog-id=214 op=LOAD Dec 16 03:18:18.085000 audit: BPF prog-id=215 op=LOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.085000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.085000 audit: BPF prog-id=216 op=LOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.085000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.085000 audit: BPF prog-id=217 op=LOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.085000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.085000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:18:18.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.085000 audit: BPF prog-id=218 op=LOAD Dec 16 03:18:18.085000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4434 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265313631653939313839626232613161613665663161306163626134 Dec 16 03:18:18.088138 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:18.238139 systemd-networkd[1519]: cali2c0421826f6: Gained IPv6LL Dec 16 03:18:18.394763 kubelet[2859]: E1216 03:18:18.394694 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:18.403821 systemd-networkd[1519]: cali888081a8370: Link UP Dec 16 03:18:18.404345 systemd-networkd[1519]: cali888081a8370: Gained carrier Dec 16 03:18:18.411596 containerd[1622]: time="2025-12-16T03:18:18.411502618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7rq44,Uid:fd1a984e-51b4-4231-b86e-dec6833958fb,Namespace:kube-system,Attempt:0,}" Dec 16 03:18:18.431243 systemd-networkd[1519]: cali8b75ab9bbcd: Gained IPv6LL Dec 16 03:18:18.526911 kubelet[2859]: E1216 03:18:18.526859 2859 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:18.665326 containerd[1622]: 2025-12-16 03:18:18.090 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0 calico-apiserver-84f875f69d- calico-apiserver 9063305a-9611-4527-b78f-50cf768d4751 851 0 2025-12-16 03:17:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84f875f69d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84f875f69d-hv8xn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali888081a8370 [] [] }} ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-" Dec 16 03:18:18.665326 containerd[1622]: 2025-12-16 03:18:18.090 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.665326 containerd[1622]: 2025-12-16 03:18:18.132 [INFO][4479] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" HandleID="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Workload="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.134 [INFO][4479] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" HandleID="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Workload="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84f875f69d-hv8xn", "timestamp":"2025-12-16 03:18:18.132600943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.135 [INFO][4479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.135 [INFO][4479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.135 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.148 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" host="localhost" Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.181 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.191 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.196 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.212 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:18.666573 containerd[1622]: 2025-12-16 03:18:18.212 [INFO][4479] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" host="localhost" Dec 16 03:18:18.673450 containerd[1622]: 2025-12-16 03:18:18.219 [INFO][4479] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f Dec 16 03:18:18.673450 containerd[1622]: 2025-12-16 03:18:18.232 [INFO][4479] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" host="localhost" Dec 16 03:18:18.673450 containerd[1622]: 2025-12-16 03:18:18.395 [INFO][4479] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" host="localhost" Dec 16 03:18:18.673450 containerd[1622]: 2025-12-16 03:18:18.396 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" host="localhost" Dec 16 03:18:18.673450 containerd[1622]: 2025-12-16 03:18:18.396 [INFO][4479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:18:18.673450 containerd[1622]: 2025-12-16 03:18:18.396 [INFO][4479] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" HandleID="k8s-pod-network.df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Workload="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.676594 containerd[1622]: 2025-12-16 03:18:18.400 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0", GenerateName:"calico-apiserver-84f875f69d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9063305a-9611-4527-b78f-50cf768d4751", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f875f69d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84f875f69d-hv8xn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali888081a8370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:18.676739 containerd[1622]: 2025-12-16 03:18:18.401 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.676739 containerd[1622]: 2025-12-16 03:18:18.401 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali888081a8370 ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.676739 containerd[1622]: 2025-12-16 03:18:18.404 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.676858 containerd[1622]: 2025-12-16 03:18:18.408 [INFO][4446] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0", GenerateName:"calico-apiserver-84f875f69d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9063305a-9611-4527-b78f-50cf768d4751", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f875f69d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f", Pod:"calico-apiserver-84f875f69d-hv8xn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali888081a8370", MAC:"22:d5:b5:84:92:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:18.676941 containerd[1622]: 2025-12-16 03:18:18.647 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" Namespace="calico-apiserver" Pod="calico-apiserver-84f875f69d-hv8xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f875f69d--hv8xn-eth0" Dec 16 03:18:18.706000 audit[4517]: NETFILTER_CFG table=filter:129 family=2 entries=51 op=nft_register_chain pid=4517 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:18.706000 audit[4517]: SYSCALL arch=c000003e syscall=46 success=yes exit=27116 a0=3 a1=7ffc8daa05f0 a2=0 a3=7ffc8daa05dc items=0 ppid=4109 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:18.706000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:18.720115 containerd[1622]: time="2025-12-16T03:18:18.719844728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d5cb44968-bmmvj,Uid:04744c4e-2de2-49b5-8dbb-8f920ac6f068,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e161e99189bb2a1aa6ef1a0acba49313a44ed0493aa9bb3a1cddd5f44dff4a9\"" Dec 16 03:18:18.722841 containerd[1622]: time="2025-12-16T03:18:18.722441277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:18:18.930546 containerd[1622]: time="2025-12-16T03:18:18.927740473Z" level=info msg="connecting to shim df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f" address="unix:///run/containerd/s/e5e61ec95a7c751bbc7ac310f4ff5bba39680947f37a1cb8aa4b8cfb9c0f9833" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:19.006471 systemd[1]: Started cri-containerd-df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f.scope - libcontainer container 
df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f. Dec 16 03:18:19.045000 audit: BPF prog-id=219 op=LOAD Dec 16 03:18:19.046000 audit: BPF prog-id=220 op=LOAD Dec 16 03:18:19.046000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.046000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:18:19.046000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.046000 audit: BPF prog-id=221 op=LOAD Dec 16 03:18:19.046000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.046000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.053000 audit: BPF prog-id=222 op=LOAD Dec 16 03:18:19.053000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.053000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:18:19.053000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.055000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:18:19.055000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:18:19.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.055000 audit: BPF prog-id=223 op=LOAD Dec 16 03:18:19.055000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4547 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466376264393264393862643462653736613136356635653332666461 Dec 16 03:18:19.070697 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:19.116869 systemd-networkd[1519]: cali33601af3ad4: Link UP Dec 16 03:18:19.119265 systemd-networkd[1519]: cali33601af3ad4: Gained carrier Dec 16 03:18:19.127214 containerd[1622]: time="2025-12-16T03:18:19.126811892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:19.146788 containerd[1622]: time="2025-12-16T03:18:19.146109090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:19.150656 containerd[1622]: time="2025-12-16T03:18:19.146312102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Dec 16 03:18:19.150979 kubelet[2859]: E1216 03:18:19.150851 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:19.150979 kubelet[2859]: E1216 03:18:19.150913 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:19.158813 kubelet[2859]: E1216 03:18:19.158720 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Nam
e:kube-api-access-lg9dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d5cb44968-bmmvj_calico-system(04744c4e-2de2-49b5-8dbb-8f920ac6f068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:19.161104 kubelet[2859]: E1216 03:18:19.161044 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:18:19.195058 containerd[1622]: 2025-12-16 03:18:18.408 [INFO][4494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0 calico-apiserver-689f849d56- calico-apiserver 8653bc32-7498-477b-85bb-6a749d44ed95 854 0 2025-12-16 03:17:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:689f849d56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-689f849d56-bpgq2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33601af3ad4 [] [] }} ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-" Dec 16 03:18:19.195058 containerd[1622]: 2025-12-16 03:18:18.408 [INFO][4494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 16 03:18:19.195058 containerd[1622]: 2025-12-16 03:18:18.726 [INFO][4511] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" HandleID="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Workload="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 
16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.726 [INFO][4511] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" HandleID="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Workload="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-689f849d56-bpgq2", "timestamp":"2025-12-16 03:18:18.726654454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.727 [INFO][4511] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.727 [INFO][4511] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.727 [INFO][4511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.806 [INFO][4511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" host="localhost" Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.861 [INFO][4511] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.911 [INFO][4511] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.917 [INFO][4511] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.942 [INFO][4511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:19.195360 containerd[1622]: 2025-12-16 03:18:18.942 [INFO][4511] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" host="localhost" Dec 16 03:18:19.195694 containerd[1622]: 2025-12-16 03:18:18.955 [INFO][4511] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38 Dec 16 03:18:19.195694 containerd[1622]: 2025-12-16 03:18:18.985 [INFO][4511] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" host="localhost" Dec 16 03:18:19.195694 containerd[1622]: 2025-12-16 03:18:19.074 [INFO][4511] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" host="localhost" Dec 16 03:18:19.195694 containerd[1622]: 2025-12-16 03:18:19.074 [INFO][4511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" host="localhost" Dec 16 03:18:19.195694 containerd[1622]: 2025-12-16 03:18:19.074 [INFO][4511] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:18:19.195694 containerd[1622]: 2025-12-16 03:18:19.075 [INFO][4511] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" HandleID="k8s-pod-network.3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Workload="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 16 03:18:19.204375 containerd[1622]: 2025-12-16 03:18:19.098 [INFO][4494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0", GenerateName:"calico-apiserver-689f849d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"8653bc32-7498-477b-85bb-6a749d44ed95", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689f849d56", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-689f849d56-bpgq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33601af3ad4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:19.204490 containerd[1622]: 2025-12-16 03:18:19.098 [INFO][4494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 16 03:18:19.204490 containerd[1622]: 2025-12-16 03:18:19.098 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33601af3ad4 ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 16 03:18:19.204490 containerd[1622]: 2025-12-16 03:18:19.118 [INFO][4494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 16 03:18:19.204597 containerd[1622]: 2025-12-16 03:18:19.130 [INFO][4494] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0", GenerateName:"calico-apiserver-689f849d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"8653bc32-7498-477b-85bb-6a749d44ed95", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"689f849d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38", Pod:"calico-apiserver-689f849d56-bpgq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33601af3ad4", MAC:"86:0e:2a:dc:79:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:19.204666 containerd[1622]: 2025-12-16 03:18:19.172 [INFO][4494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" Namespace="calico-apiserver" Pod="calico-apiserver-689f849d56-bpgq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--689f849d56--bpgq2-eth0" Dec 16 03:18:19.254000 audit[4599]: NETFILTER_CFG table=filter:130 family=2 entries=45 op=nft_register_chain pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:19.254000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7ffee7eb6c80 a2=0 a3=7ffee7eb6c6c items=0 ppid=4109 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.254000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:19.288121 kubelet[2859]: E1216 03:18:19.288081 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:19.293888 containerd[1622]: time="2025-12-16T03:18:19.293844653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4q4f,Uid:e0e61ffb-fcc3-4b0c-98ff-1e34abeff548,Namespace:kube-system,Attempt:0,}" Dec 16 03:18:19.353230 containerd[1622]: time="2025-12-16T03:18:19.353105458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f875f69d-hv8xn,Uid:9063305a-9611-4527-b78f-50cf768d4751,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df7bd92d98bd4be76a165f5e32fda42123ae3ad1a24e5b96c34e564b2eb0ba5f\"" Dec 16 03:18:19.357468 containerd[1622]: time="2025-12-16T03:18:19.357418542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:19.536148 kubelet[2859]: E1216 03:18:19.535847 2859 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:18:19.574986 systemd-networkd[1519]: cali751d49bde02: Link UP Dec 16 03:18:19.576845 systemd-networkd[1519]: cali751d49bde02: Gained carrier Dec 16 03:18:19.702126 containerd[1622]: time="2025-12-16T03:18:19.700338614Z" level=info msg="connecting to shim 3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38" address="unix:///run/containerd/s/27db434b7dfaf5e5c10a59a208a5e3ef436cd3cd9a36573ade8c3e8c48a8b481" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:19.706915 containerd[1622]: 2025-12-16 03:18:18.928 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7rq44-eth0 coredns-674b8bbfcf- kube-system fd1a984e-51b4-4231-b86e-dec6833958fb 857 0 2025-12-16 03:17:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7rq44 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali751d49bde02 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-" Dec 16 03:18:19.706915 containerd[1622]: 2025-12-16 03:18:18.929 [INFO][4525] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.706915 containerd[1622]: 2025-12-16 03:18:19.124 [INFO][4571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" HandleID="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Workload="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.125 [INFO][4571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" HandleID="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Workload="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004845d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7rq44", "timestamp":"2025-12-16 03:18:19.124819081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.125 [INFO][4571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.125 [INFO][4571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.125 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.155 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" host="localhost" Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.225 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.393 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.429 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.435 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:19.707519 containerd[1622]: 2025-12-16 03:18:19.435 [INFO][4571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" host="localhost" Dec 16 03:18:19.709298 containerd[1622]: 2025-12-16 03:18:19.446 [INFO][4571] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d Dec 16 03:18:19.709298 containerd[1622]: 2025-12-16 03:18:19.501 [INFO][4571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" host="localhost" Dec 16 03:18:19.709298 containerd[1622]: 2025-12-16 03:18:19.553 [INFO][4571] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" host="localhost" Dec 16 03:18:19.709298 containerd[1622]: 2025-12-16 03:18:19.553 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" host="localhost" Dec 16 03:18:19.709298 containerd[1622]: 2025-12-16 03:18:19.553 [INFO][4571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:18:19.709298 containerd[1622]: 2025-12-16 03:18:19.553 [INFO][4571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" HandleID="k8s-pod-network.0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Workload="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.709482 containerd[1622]: 2025-12-16 03:18:19.563 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7rq44-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd1a984e-51b4-4231-b86e-dec6833958fb", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7rq44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali751d49bde02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:19.709569 containerd[1622]: 2025-12-16 03:18:19.563 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.709569 containerd[1622]: 2025-12-16 03:18:19.563 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali751d49bde02 ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.709569 containerd[1622]: 2025-12-16 03:18:19.575 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.709676 containerd[1622]: 2025-12-16 03:18:19.576 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7rq44-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd1a984e-51b4-4231-b86e-dec6833958fb", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d", Pod:"coredns-674b8bbfcf-7rq44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali751d49bde02", MAC:"56:3c:d2:1b:d9:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:19.709676 containerd[1622]: 2025-12-16 03:18:19.691 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7rq44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7rq44-eth0" Dec 16 03:18:19.751000 audit[4657]: NETFILTER_CFG table=filter:131 family=2 entries=54 op=nft_register_chain pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:19.774658 kernel: kauditd_printk_skb: 278 callbacks suppressed Dec 16 03:18:19.774831 kernel: audit: type=1325 audit(1765855099.751:674): table=filter:131 family=2 entries=54 op=nft_register_chain pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:19.751000 audit[4657]: SYSCALL arch=c000003e syscall=46 success=yes exit=26100 a0=3 a1=7ffc0e4a33b0 a2=0 a3=7ffc0e4a339c items=0 ppid=4109 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.788156 kernel: audit: type=1300 audit(1765855099.751:674): arch=c000003e syscall=46 success=yes exit=26100 a0=3 a1=7ffc0e4a33b0 a2=0 a3=7ffc0e4a339c items=0 ppid=4109 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.788308 kernel: audit: type=1327 audit(1765855099.751:674): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:19.751000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:19.824874 systemd[1]: Started cri-containerd-3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38.scope - libcontainer container 3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38. Dec 16 03:18:19.847000 audit: BPF prog-id=224 op=LOAD Dec 16 03:18:19.851944 kernel: audit: type=1334 audit(1765855099.847:675): prog-id=224 op=LOAD Dec 16 03:18:19.852114 kernel: audit: type=1334 audit(1765855099.848:676): prog-id=225 op=LOAD Dec 16 03:18:19.848000 audit: BPF prog-id=225 op=LOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.861872 containerd[1622]: time="2025-12-16T03:18:19.856075821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:19.861991 kernel: audit: type=1300 audit(1765855099.848:676): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.852532 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:19.848000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.904126 kernel: audit: type=1327 audit(1765855099.848:676): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: BPF prog-id=225 op=UNLOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.935507 kernel: audit: type=1334 audit(1765855099.848:677): prog-id=225 op=UNLOAD Dec 16 03:18:19.935666 kernel: audit: type=1300 audit(1765855099.848:677): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.938872 kernel: audit: type=1327 audit(1765855099.848:677): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: BPF prog-id=226 op=LOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: BPF prog-id=227 op=LOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:18:19.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.848000 audit: BPF prog-id=228 op=LOAD Dec 16 03:18:19.848000 audit[4644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4625 pid=4644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:19.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343164626338353838626130616665633234643537386462616631 Dec 16 03:18:19.970551 containerd[1622]: time="2025-12-16T03:18:19.970181157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:19.970551 containerd[1622]: time="2025-12-16T03:18:19.970272874Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:19.971124 kubelet[2859]: E1216 03:18:19.971067 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:19.971253 kubelet[2859]: E1216 03:18:19.971138 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:19.971458 kubelet[2859]: E1216 03:18:19.971323 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dtnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-hv8xn_calico-apiserver(9063305a-9611-4527-b78f-50cf768d4751): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:19.974300 kubelet[2859]: E1216 03:18:19.972820 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:18:19.996813 containerd[1622]: time="2025-12-16T03:18:19.996125899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-689f849d56-bpgq2,Uid:8653bc32-7498-477b-85bb-6a749d44ed95,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f41dbc8588ba0afec24d578dbaf1d94ce458447ee06a5ca172502fc0b294a38\"" Dec 16 03:18:19.997741 containerd[1622]: time="2025-12-16T03:18:19.997704282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:20.041130 containerd[1622]: time="2025-12-16T03:18:20.040883281Z" level=info msg="connecting to shim 0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d" address="unix:///run/containerd/s/a11fc28f552f88667b56d01e6c26e5925d4ca310b514a3ff0850e634e98adac0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:20.073699 systemd-networkd[1519]: cali808e5270d56: Link UP Dec 16 03:18:20.075535 systemd-networkd[1519]: cali808e5270d56: Gained carrier Dec 16 03:18:20.095350 systemd[1]: Started cri-containerd-0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d.scope - libcontainer container 0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d. 
Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.734 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0 coredns-674b8bbfcf- kube-system e0e61ffb-fcc3-4b0c-98ff-1e34abeff548 856 0 2025-12-16 03:17:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-m4q4f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali808e5270d56 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.736 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.917 [INFO][4666] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" HandleID="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Workload="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.918 [INFO][4666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" HandleID="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Workload="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0000c0d20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-m4q4f", "timestamp":"2025-12-16 03:18:19.917857094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.918 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.918 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.918 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:19.975 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.006 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.019 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.024 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.031 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.031 [INFO][4666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" 
host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.035 [INFO][4666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83 Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.044 [INFO][4666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.054 [INFO][4666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.056 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" host="localhost" Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.056 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:18:20.114996 containerd[1622]: 2025-12-16 03:18:20.056 [INFO][4666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" HandleID="k8s-pod-network.5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Workload="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.115794 containerd[1622]: 2025-12-16 03:18:20.062 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0e61ffb-fcc3-4b0c-98ff-1e34abeff548", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-m4q4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali808e5270d56", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:20.115794 containerd[1622]: 2025-12-16 03:18:20.062 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.115794 containerd[1622]: 2025-12-16 03:18:20.062 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali808e5270d56 ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.115794 containerd[1622]: 2025-12-16 03:18:20.076 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.115794 containerd[1622]: 2025-12-16 03:18:20.077 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0e61ffb-fcc3-4b0c-98ff-1e34abeff548", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83", Pod:"coredns-674b8bbfcf-m4q4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali808e5270d56", MAC:"9a:75:cb:f3:f5:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:20.115794 containerd[1622]: 2025-12-16 03:18:20.110 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4q4f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m4q4f-eth0" Dec 16 03:18:20.123000 audit: BPF prog-id=229 op=LOAD Dec 16 03:18:20.124000 audit: BPF prog-id=230 op=LOAD Dec 16 03:18:20.124000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.125000 audit: BPF prog-id=230 op=UNLOAD Dec 16 03:18:20.125000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.125000 audit: BPF prog-id=231 op=LOAD Dec 16 03:18:20.125000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:18:20.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.127000 audit: BPF prog-id=232 op=LOAD Dec 16 03:18:20.127000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.127000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:18:20.127000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.127000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:18:20.127000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.127000 audit: BPF prog-id=233 op=LOAD Dec 16 03:18:20.127000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063616631643163366137383364316262326164666139636666383363 Dec 16 03:18:20.130616 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:20.151000 audit[4730]: NETFILTER_CFG table=filter:132 family=2 entries=36 op=nft_register_chain pid=4730 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:20.151000 audit[4730]: SYSCALL arch=c000003e syscall=46 success=yes exit=19176 a0=3 a1=7ffe0fb449a0 a2=0 a3=7ffe0fb4498c items=0 ppid=4109 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.151000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:20.192092 containerd[1622]: time="2025-12-16T03:18:20.192013644Z" 
level=info msg="connecting to shim 5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83" address="unix:///run/containerd/s/282bfdcc9eb8b69eb8042b3259684fc319058a96bae8a8432cd8cf50152e428b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:20.214349 containerd[1622]: time="2025-12-16T03:18:20.214203918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7rq44,Uid:fd1a984e-51b4-4231-b86e-dec6833958fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d\"" Dec 16 03:18:20.224656 kubelet[2859]: E1216 03:18:20.222896 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:20.224220 systemd-networkd[1519]: cali888081a8370: Gained IPv6LL Dec 16 03:18:20.252989 containerd[1622]: time="2025-12-16T03:18:20.248718661Z" level=info msg="CreateContainer within sandbox \"0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:18:20.275433 systemd[1]: Started cri-containerd-5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83.scope - libcontainer container 5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83. 
Dec 16 03:18:20.287286 containerd[1622]: time="2025-12-16T03:18:20.286624616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p7kjv,Uid:296bf1a5-e8cc-4c0a-a28b-66e55ce1e788,Namespace:calico-system,Attempt:0,}" Dec 16 03:18:20.287632 containerd[1622]: time="2025-12-16T03:18:20.287572533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fml4k,Uid:10d0064d-4592-4240-a31b-0b629652a239,Namespace:calico-system,Attempt:0,}" Dec 16 03:18:20.310000 audit: BPF prog-id=234 op=LOAD Dec 16 03:18:20.310000 audit: BPF prog-id=235 op=LOAD Dec 16 03:18:20.310000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.310000 audit: BPF prog-id=235 op=UNLOAD Dec 16 03:18:20.310000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.310000 audit: BPF prog-id=236 op=LOAD Dec 16 03:18:20.310000 audit[4757]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.310000 audit: BPF prog-id=237 op=LOAD Dec 16 03:18:20.310000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.310000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:18:20.310000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.310000 audit: BPF prog-id=236 op=UNLOAD 
Dec 16 03:18:20.310000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.310000 audit: BPF prog-id=238 op=LOAD Dec 16 03:18:20.310000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4746 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562303466393461623838623530613731633135333136306332386138 Dec 16 03:18:20.328796 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:20.329574 containerd[1622]: time="2025-12-16T03:18:20.329451781Z" level=info msg="Container 469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:18:20.341226 containerd[1622]: time="2025-12-16T03:18:20.341144365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:20.356232 containerd[1622]: time="2025-12-16T03:18:20.356171882Z" level=info msg="CreateContainer within sandbox 
\"0caf1d1c6a783d1bb2adfa9cff83cf52abd0cdf25db9f366f1455b2166fa823d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13\"" Dec 16 03:18:20.358375 containerd[1622]: time="2025-12-16T03:18:20.358027966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:20.359120 containerd[1622]: time="2025-12-16T03:18:20.358988867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:20.359681 kubelet[2859]: E1216 03:18:20.359641 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:20.359762 kubelet[2859]: E1216 03:18:20.359688 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:20.359862 kubelet[2859]: E1216 03:18:20.359820 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhhvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-689f849d56-bpgq2_calico-apiserver(8653bc32-7498-477b-85bb-6a749d44ed95): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:20.360658 containerd[1622]: time="2025-12-16T03:18:20.360612665Z" level=info msg="StartContainer for \"469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13\"" Dec 16 03:18:20.361174 kubelet[2859]: E1216 03:18:20.361123 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:18:20.364909 containerd[1622]: time="2025-12-16T03:18:20.364549729Z" level=info msg="connecting to shim 469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13" address="unix:///run/containerd/s/a11fc28f552f88667b56d01e6c26e5925d4ca310b514a3ff0850e634e98adac0" protocol=ttrpc version=3 Dec 16 03:18:20.417732 containerd[1622]: time="2025-12-16T03:18:20.417492849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4q4f,Uid:e0e61ffb-fcc3-4b0c-98ff-1e34abeff548,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83\"" Dec 16 03:18:20.423675 kubelet[2859]: E1216 03:18:20.423587 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:20.426276 systemd[1]: Started cri-containerd-469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13.scope - libcontainer container 469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13. 
Dec 16 03:18:20.430894 containerd[1622]: time="2025-12-16T03:18:20.430517228Z" level=info msg="CreateContainer within sandbox \"5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:18:20.471028 containerd[1622]: time="2025-12-16T03:18:20.470643030Z" level=info msg="Container e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:18:20.471000 audit: BPF prog-id=239 op=LOAD Dec 16 03:18:20.473000 audit: BPF prog-id=240 op=LOAD Dec 16 03:18:20.473000 audit[4806]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.473000 audit: BPF prog-id=240 op=UNLOAD Dec 16 03:18:20.473000 audit[4806]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.473000 audit: BPF prog-id=241 op=LOAD Dec 16 03:18:20.473000 audit[4806]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.474000 audit: BPF prog-id=242 op=LOAD Dec 16 03:18:20.474000 audit[4806]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.474000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:18:20.474000 audit[4806]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.474000 audit: BPF prog-id=241 op=UNLOAD Dec 16 
03:18:20.474000 audit[4806]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.474000 audit: BPF prog-id=243 op=LOAD Dec 16 03:18:20.474000 audit[4806]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4690 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393935396361323665343466333037306165643033656237623564 Dec 16 03:18:20.497672 containerd[1622]: time="2025-12-16T03:18:20.495454041Z" level=info msg="CreateContainer within sandbox \"5b04f94ab88b50a71c153160c28a8987ca64e89d820169b3f4d3e84ff019ee83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66\"" Dec 16 03:18:20.500454 containerd[1622]: time="2025-12-16T03:18:20.500304425Z" level=info msg="StartContainer for \"e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66\"" Dec 16 03:18:20.505457 containerd[1622]: time="2025-12-16T03:18:20.505382405Z" level=info msg="connecting to shim e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66" 
address="unix:///run/containerd/s/282bfdcc9eb8b69eb8042b3259684fc319058a96bae8a8432cd8cf50152e428b" protocol=ttrpc version=3 Dec 16 03:18:20.564824 kubelet[2859]: E1216 03:18:20.564729 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:18:20.570061 kubelet[2859]: E1216 03:18:20.566087 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:18:20.570061 kubelet[2859]: E1216 03:18:20.567749 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:18:20.596506 containerd[1622]: time="2025-12-16T03:18:20.596420326Z" 
level=info msg="StartContainer for \"469959ca26e44f3070aed03eb7b5d16e7bb76c37e7fa19b40e32c42684817c13\" returns successfully" Dec 16 03:18:20.596626 systemd[1]: Started cri-containerd-e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66.scope - libcontainer container e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66. Dec 16 03:18:20.626000 audit: BPF prog-id=244 op=LOAD Dec 16 03:18:20.627000 audit: BPF prog-id=245 op=LOAD Dec 16 03:18:20.627000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4746 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.628000 audit: BPF prog-id=245 op=UNLOAD Dec 16 03:18:20.628000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.628000 audit: BPF prog-id=246 op=LOAD Dec 16 03:18:20.628000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4746 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.628000 audit: BPF prog-id=247 op=LOAD Dec 16 03:18:20.628000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4746 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.628000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:18:20.628000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.628000 audit: BPF prog-id=246 op=UNLOAD Dec 16 03:18:20.628000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4864 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.628000 audit: BPF prog-id=248 op=LOAD Dec 16 03:18:20.628000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4746 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539616333323231393638363733346430353664396332666638316465 Dec 16 03:18:20.684213 containerd[1622]: time="2025-12-16T03:18:20.684130236Z" level=info msg="StartContainer for \"e9ac32219686734d056d9c2ff81de92309e37768a8e321bbc2bdffeb2a884a66\" returns successfully" Dec 16 03:18:20.700202 systemd-networkd[1519]: cali9ac41687b95: Link UP Dec 16 03:18:20.702480 systemd-networkd[1519]: cali9ac41687b95: Gained carrier Dec 16 03:18:20.711000 audit[4893]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:20.711000 audit[4893]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc72bc8aa0 a2=0 a3=7ffc72bc8a8c items=0 ppid=3025 pid=4893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:20.719000 audit[4893]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:20.719000 audit[4893]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc72bc8aa0 a2=0 a3=0 items=0 ppid=3025 pid=4893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.483 [INFO][4791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--fml4k-eth0 goldmane-666569f655- calico-system 10d0064d-4592-4240-a31b-0b629652a239 852 0 2025-12-16 03:17:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-fml4k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9ac41687b95 [] [] }} ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.484 [INFO][4791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.538 [INFO][4849] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" HandleID="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Workload="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.538 [INFO][4849] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" HandleID="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Workload="localhost-k8s-goldmane--666569f655--fml4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-fml4k", "timestamp":"2025-12-16 03:18:20.538653874 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.538 [INFO][4849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.538 [INFO][4849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.539 [INFO][4849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.551 [INFO][4849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.573 [INFO][4849] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.609 [INFO][4849] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.624 [INFO][4849] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.635 [INFO][4849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.635 [INFO][4849] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.640 [INFO][4849] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.656 [INFO][4849] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.691 [INFO][4849] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.691 [INFO][4849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" host="localhost" Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.691 [INFO][4849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:18:20.757754 containerd[1622]: 2025-12-16 03:18:20.691 [INFO][4849] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" HandleID="k8s-pod-network.dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Workload="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.762467 containerd[1622]: 2025-12-16 03:18:20.695 [INFO][4791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fml4k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10d0064d-4592-4240-a31b-0b629652a239", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-fml4k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9ac41687b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:20.762467 containerd[1622]: 2025-12-16 03:18:20.695 [INFO][4791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.762467 containerd[1622]: 2025-12-16 03:18:20.695 [INFO][4791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ac41687b95 ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.762467 containerd[1622]: 2025-12-16 03:18:20.703 [INFO][4791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.762467 containerd[1622]: 2025-12-16 03:18:20.704 [INFO][4791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fml4k-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10d0064d-4592-4240-a31b-0b629652a239", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba", Pod:"goldmane-666569f655-fml4k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9ac41687b95", MAC:"5e:a3:f8:2c:04:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:20.762467 containerd[1622]: 2025-12-16 03:18:20.751 [INFO][4791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" Namespace="calico-system" Pod="goldmane-666569f655-fml4k" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fml4k-eth0" Dec 16 03:18:20.770000 audit[4909]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule 
pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:20.770000 audit[4909]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd4a19b220 a2=0 a3=7ffd4a19b20c items=0 ppid=3025 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:20.777000 audit[4909]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:20.777000 audit[4909]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd4a19b220 a2=0 a3=0 items=0 ppid=3025 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:20.802000 audit[4912]: NETFILTER_CFG table=filter:137 family=2 entries=60 op=nft_register_chain pid=4912 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:20.802000 audit[4912]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffc234132e0 a2=0 a3=7ffc234132cc items=0 ppid=4109 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.802000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 
03:18:20.825833 containerd[1622]: time="2025-12-16T03:18:20.824223809Z" level=info msg="connecting to shim dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba" address="unix:///run/containerd/s/7a11d1358ef9b71c9690244e225d5293fbb03693cae5fe0041ec5eef52e35df1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:20.865122 systemd-networkd[1519]: cali33601af3ad4: Gained IPv6LL Dec 16 03:18:20.870333 systemd[1]: Started cri-containerd-dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba.scope - libcontainer container dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba. Dec 16 03:18:20.878373 systemd-networkd[1519]: calieba7479d6ad: Link UP Dec 16 03:18:20.882565 systemd-networkd[1519]: calieba7479d6ad: Gained carrier Dec 16 03:18:20.900000 audit: BPF prog-id=249 op=LOAD Dec 16 03:18:20.901000 audit: BPF prog-id=250 op=LOAD Dec 16 03:18:20.901000 audit[4933]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.901000 audit: BPF prog-id=250 op=UNLOAD Dec 16 03:18:20.901000 audit[4933]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.901000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.901000 audit: BPF prog-id=251 op=LOAD Dec 16 03:18:20.901000 audit[4933]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.901000 audit: BPF prog-id=252 op=LOAD Dec 16 03:18:20.901000 audit[4933]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.901000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:18:20.901000 audit[4933]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:18:20.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.901000 audit: BPF prog-id=251 op=UNLOAD Dec 16 03:18:20.901000 audit[4933]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.902000 audit: BPF prog-id=253 op=LOAD Dec 16 03:18:20.902000 audit[4933]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4920 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:20.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616335613131333433323231656530653464633233316437356533 Dec 16 03:18:20.904403 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:20.973598 containerd[1622]: time="2025-12-16T03:18:20.973537363Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-fml4k,Uid:10d0064d-4592-4240-a31b-0b629652a239,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbac5a11343221ee0e4dc231d75e325b54483d06505647c0975dc861658bfcba\"" Dec 16 03:18:20.975224 containerd[1622]: time="2025-12-16T03:18:20.975184346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.413 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--p7kjv-eth0 csi-node-driver- calico-system 296bf1a5-e8cc-4c0a-a28b-66e55ce1e788 737 0 2025-12-16 03:17:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-p7kjv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieba7479d6ad [] [] }} ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.413 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.546 [INFO][4841] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" HandleID="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" 
Workload="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.559 [INFO][4841] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" HandleID="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Workload="localhost-k8s-csi--node--driver--p7kjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c3940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-p7kjv", "timestamp":"2025-12-16 03:18:20.54677124 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.559 [INFO][4841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.691 [INFO][4841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.692 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.728 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.768 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.790 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.796 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.803 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.803 [INFO][4841] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.816 [INFO][4841] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.826 [INFO][4841] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.849 [INFO][4841] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.849 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" host="localhost" Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.849 [INFO][4841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:18:20.984671 containerd[1622]: 2025-12-16 03:18:20.849 [INFO][4841] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" HandleID="k8s-pod-network.75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Workload="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:20.985883 containerd[1622]: 2025-12-16 03:18:20.856 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p7kjv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-p7kjv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieba7479d6ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:20.985883 containerd[1622]: 2025-12-16 03:18:20.857 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:20.985883 containerd[1622]: 2025-12-16 03:18:20.857 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieba7479d6ad ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:20.985883 containerd[1622]: 2025-12-16 03:18:20.883 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:20.985883 containerd[1622]: 2025-12-16 03:18:20.884 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" 
Namespace="calico-system" Pod="csi-node-driver-p7kjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p7kjv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"296bf1a5-e8cc-4c0a-a28b-66e55ce1e788", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f", Pod:"csi-node-driver-p7kjv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieba7479d6ad", MAC:"5a:09:46:1b:25:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:18:20.985883 containerd[1622]: 2025-12-16 03:18:20.977 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" Namespace="calico-system" Pod="csi-node-driver-p7kjv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--p7kjv-eth0" Dec 16 03:18:21.016000 audit[4970]: NETFILTER_CFG table=filter:138 family=2 entries=56 op=nft_register_chain pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:18:21.016000 audit[4970]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffffef68ba0 a2=0 a3=7ffffef68b8c items=0 ppid=4109 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.016000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:18:21.037142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151155411.mount: Deactivated successfully. Dec 16 03:18:21.246435 systemd-networkd[1519]: cali751d49bde02: Gained IPv6LL Dec 16 03:18:21.335042 containerd[1622]: time="2025-12-16T03:18:21.327107293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:21.571762 kubelet[2859]: E1216 03:18:21.570558 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:21.576427 kubelet[2859]: E1216 03:18:21.576383 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:21.578487 kubelet[2859]: E1216 03:18:21.578372 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:18:21.637433 containerd[1622]: time="2025-12-16T03:18:21.637096981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:18:21.637730 containerd[1622]: time="2025-12-16T03:18:21.637676756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:21.638371 kubelet[2859]: E1216 03:18:21.638283 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:21.638371 kubelet[2859]: E1216 03:18:21.638363 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:21.638618 kubelet[2859]: E1216 03:18:21.638571 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ssht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fml4k_calico-system(10d0064d-4592-4240-a31b-0b629652a239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:21.639815 kubelet[2859]: E1216 03:18:21.639766 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:18:21.826518 kubelet[2859]: I1216 03:18:21.826305 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m4q4f" podStartSLOduration=41.826283138 podStartE2EDuration="41.826283138s" podCreationTimestamp="2025-12-16 03:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:18:21.825075152 +0000 UTC m=+46.646265543" watchObservedRunningTime="2025-12-16 03:18:21.826283138 +0000 UTC m=+46.647473509" Dec 16 03:18:21.836662 containerd[1622]: time="2025-12-16T03:18:21.836334094Z" level=info msg="connecting to shim 75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f" address="unix:///run/containerd/s/4998ac086688ae55055655f337a711705c3e479f67fa8e3ce7dba9a39f6a29e9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:18:21.877329 systemd[1]: Started cri-containerd-75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f.scope - libcontainer container 75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f. Dec 16 03:18:21.889000 audit[5012]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:21.889000 audit[5012]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb393e5b0 a2=0 a3=7ffcb393e59c items=0 ppid=3025 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:21.892000 audit: BPF prog-id=254 op=LOAD Dec 16 03:18:21.893000 audit: BPF prog-id=255 op=LOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit: BPF prog-id=255 op=UNLOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit: BPF prog-id=256 op=LOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit: BPF prog-id=257 op=LOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit: BPF prog-id=257 op=UNLOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit: BPF prog-id=256 op=UNLOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit[5012]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:21.893000 audit: BPF prog-id=258 op=LOAD Dec 16 03:18:21.893000 audit[4992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001306e8 a2=98 a3=0 items=0 ppid=4981 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735623061323535383566636635366531353337633536636266373434 Dec 16 03:18:21.893000 audit[5012]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb393e5b0 a2=0 a3=0 items=0 ppid=3025 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:21.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:21.896305 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:18:21.956092 containerd[1622]: time="2025-12-16T03:18:21.956037196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p7kjv,Uid:296bf1a5-e8cc-4c0a-a28b-66e55ce1e788,Namespace:calico-system,Attempt:0,} returns sandbox id \"75b0a25585fcf56e1537c56cbf744af674ce6f148e7f8a02e51eea69da0e9f3f\"" Dec 16 03:18:21.958119 containerd[1622]: time="2025-12-16T03:18:21.958083265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:18:22.014185 systemd-networkd[1519]: cali808e5270d56: Gained IPv6LL Dec 16 03:18:22.361206 containerd[1622]: time="2025-12-16T03:18:22.361150323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:22.371395 containerd[1622]: time="2025-12-16T03:18:22.371301247Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:18:22.371395 containerd[1622]: time="2025-12-16T03:18:22.371367556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:22.371732 kubelet[2859]: E1216 03:18:22.371680 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:18:22.371861 kubelet[2859]: E1216 03:18:22.371753 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:18:22.372024 kubelet[2859]: E1216 03:18:22.371980 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:18:22.375423 containerd[1622]: time="2025-12-16T03:18:22.375114283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:18:22.466446 systemd-networkd[1519]: calieba7479d6ad: Gained IPv6LL Dec 16 03:18:22.585524 kubelet[2859]: E1216 03:18:22.585098 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:22.586087 kubelet[2859]: E1216 03:18:22.585922 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:18:22.587017 kubelet[2859]: E1216 03:18:22.586637 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:22.589209 systemd-networkd[1519]: cali9ac41687b95: Gained IPv6LL Dec 16 03:18:22.706037 containerd[1622]: time="2025-12-16T03:18:22.705784936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:22.760722 containerd[1622]: time="2025-12-16T03:18:22.760560169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:18:22.760722 containerd[1622]: 
time="2025-12-16T03:18:22.760625564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:22.761222 kubelet[2859]: E1216 03:18:22.761048 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:22.761222 kubelet[2859]: E1216 03:18:22.761121 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:22.761352 kubelet[2859]: E1216 03:18:22.761289 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:22.762747 kubelet[2859]: E1216 03:18:22.762604 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:18:22.764079 kubelet[2859]: I1216 03:18:22.764032 2859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7rq44" podStartSLOduration=42.76401822 podStartE2EDuration="42.76401822s" podCreationTimestamp="2025-12-16 03:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:18:22.268744738 +0000 UTC m=+47.089935119" watchObservedRunningTime="2025-12-16 03:18:22.76401822 +0000 UTC m=+47.585208591" Dec 16 03:18:23.058000 audit[5024]: NETFILTER_CFG table=filter:141 family=2 entries=20 op=nft_register_rule pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:23.058000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd9021d990 a2=0 a3=7ffd9021d97c items=0 ppid=3025 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:23.058000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:23.065000 audit[5024]: NETFILTER_CFG table=nat:142 family=2 entries=14 op=nft_register_rule pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:23.065000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd9021d990 a2=0 a3=0 items=0 ppid=3025 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:23.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:23.593152 kubelet[2859]: E1216 03:18:23.591038 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:23.596375 kubelet[2859]: E1216 03:18:23.595125 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:23.597763 kubelet[2859]: E1216 03:18:23.597676 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:18:24.095000 audit[5026]: NETFILTER_CFG table=filter:143 family=2 entries=17 op=nft_register_rule pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:24.095000 audit[5026]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6cb2adc0 a2=0 a3=7ffd6cb2adac items=0 ppid=3025 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:24.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:24.116000 audit[5026]: NETFILTER_CFG table=nat:144 family=2 entries=47 op=nft_register_chain pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:18:24.116000 audit[5026]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd6cb2adc0 a2=0 a3=7ffd6cb2adac items=0 ppid=3025 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:24.116000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:18:24.591138 kubelet[2859]: E1216 03:18:24.591084 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:30.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:40394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:30.135551 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394). Dec 16 03:18:30.137639 kernel: kauditd_printk_skb: 186 callbacks suppressed Dec 16 03:18:30.137720 kernel: audit: type=1130 audit(1765855110.133:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:40394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:30.281000 audit[5042]: USER_ACCT pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.283643 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:30.286772 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:30.287580 containerd[1622]: time="2025-12-16T03:18:30.287431221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:18:30.281000 audit[5042]: CRED_ACQ pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.297432 systemd-logind[1600]: New session 9 of user core. 
Dec 16 03:18:30.312002 kernel: audit: type=1101 audit(1765855110.281:745): pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.312189 kernel: audit: type=1103 audit(1765855110.281:746): pid=5042 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.312241 kernel: audit: type=1006 audit(1765855110.281:747): pid=5042 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 03:18:30.281000 audit[5042]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef0811d50 a2=3 a3=0 items=0 ppid=1 pid=5042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:30.321462 kernel: audit: type=1300 audit(1765855110.281:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef0811d50 a2=3 a3=0 items=0 ppid=1 pid=5042 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:30.321746 kernel: audit: type=1327 audit(1765855110.281:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:30.281000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:30.325299 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 03:18:30.326000 audit[5042]: USER_START pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.329000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.342399 kernel: audit: type=1105 audit(1765855110.326:748): pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.342545 kernel: audit: type=1103 audit(1765855110.329:749): pid=5046 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.485651 sshd[5046]: Connection closed by 10.0.0.1 port 40394 Dec 16 03:18:30.485858 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:30.485000 audit[5042]: USER_END pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.490468 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:40394.service: Deactivated successfully. 
Dec 16 03:18:30.492677 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:18:30.495748 systemd-logind[1600]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:18:30.497054 systemd-logind[1600]: Removed session 9. Dec 16 03:18:30.485000 audit[5042]: CRED_DISP pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.527000 kernel: audit: type=1106 audit(1765855110.485:750): pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.527137 kernel: audit: type=1104 audit(1765855110.485:751): pid=5042 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:30.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:40394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:18:30.727536 containerd[1622]: time="2025-12-16T03:18:30.727455524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:30.843691 containerd[1622]: time="2025-12-16T03:18:30.843628049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:30.843868 containerd[1622]: time="2025-12-16T03:18:30.843679648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:18:30.844051 kubelet[2859]: E1216 03:18:30.843999 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:30.844476 kubelet[2859]: E1216 03:18:30.844063 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:30.844476 kubelet[2859]: E1216 03:18:30.844267 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f11ab101807549ab927f64e4835ff8ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:30.846416 containerd[1622]: time="2025-12-16T03:18:30.846391108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:18:31.222826 containerd[1622]: 
time="2025-12-16T03:18:31.222680514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:31.245309 containerd[1622]: time="2025-12-16T03:18:31.245211883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:18:31.245309 containerd[1622]: time="2025-12-16T03:18:31.245274903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:31.245586 kubelet[2859]: E1216 03:18:31.245521 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:31.245647 kubelet[2859]: E1216 03:18:31.245583 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:31.245803 kubelet[2859]: E1216 03:18:31.245742 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:31.247010 kubelet[2859]: E1216 03:18:31.246937 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:18:31.286602 containerd[1622]: time="2025-12-16T03:18:31.286551626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:31.636016 containerd[1622]: time="2025-12-16T03:18:31.635879583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:31.637432 containerd[1622]: time="2025-12-16T03:18:31.637374672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:31.637570 containerd[1622]: time="2025-12-16T03:18:31.637486367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:31.637687 kubelet[2859]: E1216 03:18:31.637626 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:31.637759 kubelet[2859]: E1216 03:18:31.637695 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:31.637917 kubelet[2859]: E1216 03:18:31.637861 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dtnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-hv8xn_calico-apiserver(9063305a-9611-4527-b78f-50cf768d4751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:31.639139 kubelet[2859]: E1216 03:18:31.639099 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:18:32.286767 containerd[1622]: time="2025-12-16T03:18:32.286420753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:18:32.643088 containerd[1622]: time="2025-12-16T03:18:32.642909853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:18:32.747888 containerd[1622]: time="2025-12-16T03:18:32.747793692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:32.748111 containerd[1622]: time="2025-12-16T03:18:32.747855130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:18:32.748223 kubelet[2859]: E1216 03:18:32.748157 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:32.748741 kubelet[2859]: E1216 03:18:32.748222 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:32.748741 kubelet[2859]: E1216 03:18:32.748394 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lg9dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d5cb44968-bmmvj_calico-system(04744c4e-2de2-49b5-8dbb-8f920ac6f068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:32.749651 kubelet[2859]: E1216 03:18:32.749602 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:18:33.286787 containerd[1622]: time="2025-12-16T03:18:33.286739245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:18:33.650498 containerd[1622]: time="2025-12-16T03:18:33.650343631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:18:33.723333 containerd[1622]: time="2025-12-16T03:18:33.723188074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:18:33.723333 containerd[1622]: time="2025-12-16T03:18:33.723253939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:33.723333 containerd[1622]: time="2025-12-16T03:18:33.724983866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:33.725603 kubelet[2859]: E1216 03:18:33.723546 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:33.725603 kubelet[2859]: E1216 03:18:33.723615 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:33.725603 kubelet[2859]: E1216 03:18:33.723885 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ssht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fml4k_calico-system(10d0064d-4592-4240-a31b-0b629652a239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:33.725603 kubelet[2859]: E1216 03:18:33.725343 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:18:34.080394 containerd[1622]: time="2025-12-16T03:18:34.080303716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:34.120371 containerd[1622]: time="2025-12-16T03:18:34.120224383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:34.120371 containerd[1622]: time="2025-12-16T03:18:34.120292413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:34.121006 kubelet[2859]: E1216 03:18:34.120562 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:34.121006 kubelet[2859]: E1216 03:18:34.120628 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:34.121006 kubelet[2859]: E1216 03:18:34.120801 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txz4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-gkvpq_calico-apiserver(b257371e-206f-4ea1-9c38-a798a97242ea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:34.122775 kubelet[2859]: E1216 03:18:34.122695 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:34.286661 containerd[1622]: time="2025-12-16T03:18:34.286620694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:18:34.787708 containerd[1622]: time="2025-12-16T03:18:34.787637418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:34.817375 containerd[1622]: time="2025-12-16T03:18:34.817267764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:34.817375 containerd[1622]: time="2025-12-16T03:18:34.817311739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:18:34.817657 kubelet[2859]: E1216 03:18:34.817604 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:18:34.817735 kubelet[2859]: E1216 03:18:34.817658 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:18:34.818052 kubelet[2859]: E1216 03:18:34.817975 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:34.818401 containerd[1622]: time="2025-12-16T03:18:34.818324333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:35.199250 containerd[1622]: time="2025-12-16T03:18:35.199086353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:35.220336 containerd[1622]: time="2025-12-16T03:18:35.220228554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:35.220535 containerd[1622]: time="2025-12-16T03:18:35.220262419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:35.220629 kubelet[2859]: E1216 03:18:35.220573 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:35.221087 kubelet[2859]: E1216 03:18:35.220642 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 
03:18:35.221087 kubelet[2859]: E1216 03:18:35.221026 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhhvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-689f849d56-bpgq2_calico-apiserver(8653bc32-7498-477b-85bb-6a749d44ed95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:35.221274 containerd[1622]: time="2025-12-16T03:18:35.221242250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:18:35.222288 kubelet[2859]: E1216 03:18:35.222250 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:18:35.505674 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:40530.service - OpenSSH per-connection server daemon (10.0.0.1:40530). 
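The PullImage failures above all follow one pattern: containerd gets a 404 from ghcr.io for a `v3.30.4` tag and kubelet surfaces it as ErrImagePull. As a triage aid (a sketch, not part of this node's tooling — the sample log lines in the variable are hypothetical stand-ins for the journal output above), the set of failing image references can be extracted from a saved log excerpt:

```shell
# Hypothetical excerpt of containerd PullImage error lines, matching the
# escaped-quote style seen in the journal above.
log='level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed"
level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"'

# Grab the escaped-quoted image reference from each error line, strip the
# surrounding 'PullImage \"...\"' wrapper, and de-duplicate so each missing
# tag is reported once.
refs=$(printf '%s\n' "$log" \
  | grep -o 'PullImage \\"[^\\]*\\"' \
  | sed -e 's/^PullImage \\"//' -e 's/\\"$//' \
  | sort -u)
printf '%s\n' "$refs"
```

Each reference this prints can then be checked against the registry (e.g. with `crictl pull` on the node) to confirm whether the tag actually exists.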
Dec 16 03:18:35.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:40530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:35.507869 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:18:35.508061 kernel: audit: type=1130 audit(1765855115.504:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:40530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:35.577579 containerd[1622]: time="2025-12-16T03:18:35.577337145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:35.580000 audit[5074]: USER_ACCT pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.588123 kernel: audit: type=1101 audit(1765855115.580:754): pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.585389 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:35.588502 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 40530 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:35.582000 audit[5074]: CRED_ACQ pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.591493 
systemd-logind[1600]: New session 10 of user core. Dec 16 03:18:35.596682 kernel: audit: type=1103 audit(1765855115.582:755): pid=5074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.596760 kernel: audit: type=1006 audit(1765855115.582:756): pid=5074 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 03:18:35.596805 kernel: audit: type=1300 audit(1765855115.582:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd082ef420 a2=3 a3=0 items=0 ppid=1 pid=5074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:35.582000 audit[5074]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd082ef420 a2=3 a3=0 items=0 ppid=1 pid=5074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:35.613159 kernel: audit: type=1327 audit(1765855115.582:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:35.613219 kernel: audit: type=1105 audit(1765855115.609:757): pid=5074 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.582000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:35.609000 audit[5074]: USER_START pid=5074 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.607368 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:18:35.609000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.633299 kernel: audit: type=1103 audit(1765855115.609:758): pid=5078 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.672620 containerd[1622]: time="2025-12-16T03:18:35.672282390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:35.672834 containerd[1622]: time="2025-12-16T03:18:35.672777675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:18:35.673102 kubelet[2859]: E1216 03:18:35.673049 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:35.673193 kubelet[2859]: E1216 03:18:35.673112 2859 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:35.673334 kubelet[2859]: E1216 03:18:35.673273 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{T
ype:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:35.674524 kubelet[2859]: E1216 03:18:35.674453 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:18:35.805706 sshd[5078]: Connection closed by 10.0.0.1 port 40530 Dec 16 03:18:35.806295 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:35.807000 audit[5074]: USER_END pid=5074 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.812912 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:40530.service: 
Deactivated successfully. Dec 16 03:18:35.825268 kernel: audit: type=1106 audit(1765855115.807:759): pid=5074 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.825400 kernel: audit: type=1104 audit(1765855115.807:760): pid=5074 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.807000 audit[5074]: CRED_DISP pid=5074 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:35.820252 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:18:35.823540 systemd-logind[1600]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:18:35.824651 systemd-logind[1600]: Removed session 10. Dec 16 03:18:35.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:40530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:40.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:36582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:40.833400 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:36582.service - OpenSSH per-connection server daemon (10.0.0.1:36582). 
Dec 16 03:18:40.858330 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:18:40.858453 kernel: audit: type=1130 audit(1765855120.832:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:36582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:41.018216 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 36582 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:41.013000 audit[5093]: USER_ACCT pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.023272 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:41.020000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.042175 systemd-logind[1600]: New session 11 of user core. 
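The audit PROCTITLE records above carry the process command line hex-encoded (NUL-separated argv bytes). The payload `737368642D73657373696F6E3A20636F7265205B707269765D`, repeated for each ssh login, decodes to plain ASCII; `ausearch -i` would do this automatically, but a minimal portable-sh sketch of the decoding is:

```shell
# Decode an audit PROCTITLE hex payload to ASCII. Value copied from the
# audit records above. Uses only POSIX sh: hex pair -> decimal via $((0x..)),
# then an octal \ooo escape for printf.
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D

decoded=""
rest=$proctitle
while [ -n "$rest" ]; do
  byte=${rest%"${rest#??}"}   # first two hex digits
  rest=${rest#??}             # remainder of the string
  decoded="$decoded$(printf "\\$(printf '%03o' $((0x$byte)))")"
done
echo "$decoded"
```

For this payload there are no embedded NUL separators, so the result is a single string: the privileged sshd session helper's process title.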
Dec 16 03:18:41.077363 kernel: audit: type=1101 audit(1765855121.013:763): pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.077744 kernel: audit: type=1103 audit(1765855121.020:764): pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.077774 kernel: audit: type=1006 audit(1765855121.020:765): pid=5093 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:18:41.020000 audit[5093]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc93b8b240 a2=3 a3=0 items=0 ppid=1 pid=5093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:41.116281 kernel: audit: type=1300 audit(1765855121.020:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc93b8b240 a2=3 a3=0 items=0 ppid=1 pid=5093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:41.020000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:41.132517 kernel: audit: type=1327 audit(1765855121.020:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:41.142891 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 03:18:41.156000 audit[5093]: USER_START pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.176000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.226480 kernel: audit: type=1105 audit(1765855121.156:766): pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.226638 kernel: audit: type=1103 audit(1765855121.176:767): pid=5101 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.465317 sshd[5101]: Connection closed by 10.0.0.1 port 36582 Dec 16 03:18:41.466483 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:41.472000 audit[5093]: USER_END pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.481783 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:36582.service: Deactivated successfully. 
Dec 16 03:18:41.491255 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:18:41.494919 systemd-logind[1600]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:18:41.496831 systemd-logind[1600]: Removed session 11. Dec 16 03:18:41.525176 kernel: audit: type=1106 audit(1765855121.472:768): pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.472000 audit[5093]: CRED_DISP pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:41.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:36582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:18:41.540353 kernel: audit: type=1104 audit(1765855121.472:769): pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:43.297896 kubelet[2859]: E1216 03:18:43.297839 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:18:43.299031 kubelet[2859]: E1216 03:18:43.292012 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:18:43.476636 kubelet[2859]: E1216 03:18:43.476595 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:44.099119 kubelet[2859]: E1216 03:18:44.097918 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:44.287106 kubelet[2859]: E1216 03:18:44.286272 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:18:45.287291 kubelet[2859]: E1216 03:18:45.287117 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:46.289060 kubelet[2859]: E1216 03:18:46.288890 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:18:46.506189 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:18:46.506344 kernel: audit: type=1130 audit(1765855126.498:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:36716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:46.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:36716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:46.502476 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:36716.service - OpenSSH per-connection server daemon (10.0.0.1:36716). Dec 16 03:18:46.753000 audit[5166]: USER_ACCT pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.755343 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 36716 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:46.765306 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:46.760000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.772396 kernel: audit: type=1101 audit(1765855126.753:772): pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.772535 kernel: audit: type=1103 audit(1765855126.760:773): pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.772576 kernel: audit: type=1006 audit(1765855126.760:774): pid=5166 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 03:18:46.776485 kernel: audit: type=1300 audit(1765855126.760:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb15833f0 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:46.760000 audit[5166]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb15833f0 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:46.784000 systemd-logind[1600]: New session 12 of user core. Dec 16 03:18:46.760000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:46.789528 kernel: audit: type=1327 audit(1765855126.760:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:46.810322 systemd[1]: Started session-12.scope - Session 12 of User core. 
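Between the ErrImagePull errors and the later ImagePullBackOff messages, kubelet is applying its image-pull backoff. Assuming the commonly documented kubelet defaults (an initial 10s delay doubling up to a 300s cap — an assumption here, not read from this log), the retry schedule looks like:

```shell
# Sketch of an exponential image-pull backoff schedule: 10s initial delay,
# doubled each attempt, capped at 300s. The 10s/300s values are assumed
# kubelet defaults, not taken from this journal.
schedule=$(
  delay=10
  cap=300
  for attempt in 1 2 3 4 5 6 7; do
    echo "attempt $attempt: wait ${delay}s"
    delay=$((delay * 2))
    if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
  done
)
printf '%s\n' "$schedule"
```

This is why the "Back-off pulling image" entries later in the log appear at progressively longer intervals rather than on every sync.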
Dec 16 03:18:46.818000 audit[5166]: USER_START pid=5166 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.834775 kernel: audit: type=1105 audit(1765855126.818:775): pid=5166 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.834949 kernel: audit: type=1103 audit(1765855126.822:776): pid=5171 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:46.822000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:47.035351 sshd[5171]: Connection closed by 10.0.0.1 port 36716 Dec 16 03:18:47.036210 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:47.040000 audit[5166]: USER_END pid=5166 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:47.048870 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:36716.service: Deactivated successfully. 
Dec 16 03:18:47.040000 audit[5166]: CRED_DISP pid=5166 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:47.056234 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:18:47.060128 systemd-logind[1600]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:18:47.067062 systemd-logind[1600]: Removed session 12. Dec 16 03:18:47.067752 kernel: audit: type=1106 audit(1765855127.040:777): pid=5166 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:47.067818 kernel: audit: type=1104 audit(1765855127.040:778): pid=5166 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:47.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:36716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:18:48.288899 kubelet[2859]: E1216 03:18:48.288728 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:18:50.285818 kubelet[2859]: E1216 03:18:50.285712 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:50.288274 kubelet[2859]: E1216 03:18:50.288162 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:18:51.285819 kubelet[2859]: E1216 03:18:51.285767 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:18:52.064314 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730). Dec 16 03:18:52.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:60730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:52.067064 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:18:52.067239 kernel: audit: type=1130 audit(1765855132.063:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:60730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:52.144138 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:52.143000 audit[5192]: USER_ACCT pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.149323 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:52.146000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.157747 kernel: audit: type=1101 audit(1765855132.143:781): pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 
03:18:52.157877 kernel: audit: type=1103 audit(1765855132.146:782): pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.157905 kernel: audit: type=1006 audit(1765855132.146:783): pid=5192 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 03:18:52.162655 kernel: audit: type=1300 audit(1765855132.146:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee4156a70 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:52.146000 audit[5192]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee4156a70 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:52.162174 systemd-logind[1600]: New session 13 of user core. Dec 16 03:18:52.146000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:52.184572 kernel: audit: type=1327 audit(1765855132.146:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:52.190440 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 03:18:52.195000 audit[5192]: USER_START pid=5192 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.208051 kernel: audit: type=1105 audit(1765855132.195:784): pid=5192 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.208194 kernel: audit: type=1103 audit(1765855132.199:785): pid=5196 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.199000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.383152 sshd[5196]: Connection closed by 10.0.0.1 port 60730 Dec 16 03:18:52.383423 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:52.384000 audit[5192]: USER_END pid=5192 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.388629 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:60730.service: Deactivated successfully. 
Dec 16 03:18:52.391879 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:18:52.395793 systemd-logind[1600]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:18:52.397290 systemd-logind[1600]: Removed session 13. Dec 16 03:18:52.384000 audit[5192]: CRED_DISP pid=5192 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.417083 kernel: audit: type=1106 audit(1765855132.384:786): pid=5192 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.417264 kernel: audit: type=1104 audit(1765855132.384:787): pid=5192 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:52.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:60730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:18:54.288437 containerd[1622]: time="2025-12-16T03:18:54.288250634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:54.797452 containerd[1622]: time="2025-12-16T03:18:54.797362572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:54.970134 containerd[1622]: time="2025-12-16T03:18:54.970056694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:54.970356 containerd[1622]: time="2025-12-16T03:18:54.970076191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:54.970428 kubelet[2859]: E1216 03:18:54.970366 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:54.970881 kubelet[2859]: E1216 03:18:54.970428 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:54.970881 kubelet[2859]: E1216 03:18:54.970620 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dtnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-hv8xn_calico-apiserver(9063305a-9611-4527-b78f-50cf768d4751): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:54.974566 kubelet[2859]: E1216 03:18:54.972737 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:18:56.287591 containerd[1622]: time="2025-12-16T03:18:56.287226764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:56.761456 containerd[1622]: time="2025-12-16T03:18:56.761293478Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:56.797792 containerd[1622]: time="2025-12-16T03:18:56.797680918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:56.797792 containerd[1622]: time="2025-12-16T03:18:56.797752604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:56.798179 kubelet[2859]: E1216 03:18:56.798101 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:56.798651 kubelet[2859]: E1216 03:18:56.798185 2859 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:56.798651 kubelet[2859]: E1216 03:18:56.798399 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txz4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-gkvpq_calico-apiserver(b257371e-206f-4ea1-9c38-a798a97242ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:56.799689 kubelet[2859]: E1216 03:18:56.799654 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:18:57.287643 containerd[1622]: time="2025-12-16T03:18:57.287356074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:18:57.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:60852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:57.396259 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:60852.service - OpenSSH per-connection server daemon (10.0.0.1:60852). Dec 16 03:18:57.397764 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:18:57.397844 kernel: audit: type=1130 audit(1765855137.395:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:60852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:57.574000 audit[5221]: USER_ACCT pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.576272 sshd[5221]: Accepted publickey for core from 10.0.0.1 port 60852 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:57.580149 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:57.575000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.586470 systemd-logind[1600]: New session 14 of user core. 
Dec 16 03:18:57.590853 kernel: audit: type=1101 audit(1765855137.574:790): pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.590939 kernel: audit: type=1103 audit(1765855137.575:791): pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.591027 kernel: audit: type=1006 audit(1765855137.575:792): pid=5221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 03:18:57.575000 audit[5221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5726c800 a2=3 a3=0 items=0 ppid=1 pid=5221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:57.632778 kernel: audit: type=1300 audit(1765855137.575:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5726c800 a2=3 a3=0 items=0 ppid=1 pid=5221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:57.632926 kernel: audit: type=1327 audit(1765855137.575:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:57.575000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:57.635469 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 03:18:57.637000 audit[5221]: USER_START pid=5221 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.693421 containerd[1622]: time="2025-12-16T03:18:57.693267981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:57.640000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.703397 kernel: audit: type=1105 audit(1765855137.637:793): pid=5221 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.703561 kernel: audit: type=1103 audit(1765855137.640:794): pid=5225 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:57.740714 containerd[1622]: time="2025-12-16T03:18:57.740618339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:18:57.741763 containerd[1622]: time="2025-12-16T03:18:57.741356559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active 
requests=0, bytes read=0" Dec 16 03:18:57.743019 kubelet[2859]: E1216 03:18:57.742782 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:57.744039 kubelet[2859]: E1216 03:18:57.743781 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:57.744553 kubelet[2859]: E1216 03:18:57.744458 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f11ab101807549ab927f64e4835ff8ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,Window
sOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:57.745928 containerd[1622]: time="2025-12-16T03:18:57.745873121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:18:58.141580 containerd[1622]: time="2025-12-16T03:18:58.141521190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:58.167776 sshd[5225]: Connection closed by 10.0.0.1 port 60852 Dec 16 03:18:58.168167 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:58.168000 audit[5221]: USER_END pid=5221 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.212006 kernel: audit: type=1106 audit(1765855138.168:795): pid=5221 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.213167 kernel: audit: type=1104 audit(1765855138.168:796): pid=5221 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.168000 audit[5221]: CRED_DISP pid=5221 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.223234 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:60852.service: Deactivated successfully. Dec 16 03:18:58.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:60852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:58.225865 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:18:58.227066 systemd-logind[1600]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:18:58.230808 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:60858.service - OpenSSH per-connection server daemon (10.0.0.1:60858). Dec 16 03:18:58.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:60858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:58.232142 systemd-logind[1600]: Removed session 14. 
Dec 16 03:18:58.240790 containerd[1622]: time="2025-12-16T03:18:58.240716234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:18:58.240966 containerd[1622]: time="2025-12-16T03:18:58.240774034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:58.241055 kubelet[2859]: E1216 03:18:58.241005 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:58.241417 kubelet[2859]: E1216 03:18:58.241067 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:58.241455 kubelet[2859]: E1216 03:18:58.241372 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lg9dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d5cb44968-bmmvj_calico-system(04744c4e-2de2-49b5-8dbb-8f920ac6f068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:58.241589 containerd[1622]: time="2025-12-16T03:18:58.241481695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:18:58.242683 kubelet[2859]: E1216 03:18:58.242642 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:18:58.299000 audit[5239]: USER_ACCT pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.301088 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 60858 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:58.301000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.301000 audit[5239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5d490a90 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:58.301000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:58.303405 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:58.308634 systemd-logind[1600]: New session 15 of user core. Dec 16 03:18:58.318150 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 03:18:58.319000 audit[5239]: USER_START pid=5239 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.322000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.639642 sshd[5243]: Connection closed by 10.0.0.1 port 60858 Dec 16 03:18:58.640018 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:58.640000 audit[5239]: USER_END pid=5239 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.641000 audit[5239]: CRED_DISP pid=5239 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.650127 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:60858.service: Deactivated successfully. Dec 16 03:18:58.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:60858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:58.652425 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:18:58.653353 systemd-logind[1600]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 03:18:58.656972 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:60874.service - OpenSSH per-connection server daemon (10.0.0.1:60874). Dec 16 03:18:58.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:60874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:58.658326 systemd-logind[1600]: Removed session 15. Dec 16 03:18:58.715320 containerd[1622]: time="2025-12-16T03:18:58.715257067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:58.721000 audit[5255]: USER_ACCT pid=5255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.722679 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 60874 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:18:58.722000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.723000 audit[5255]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcba815830 a2=3 a3=0 items=0 ppid=1 pid=5255 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:18:58.723000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:18:58.726604 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:18:58.732039 systemd-logind[1600]: New session 16 of user core. 
Dec 16 03:18:58.746129 containerd[1622]: time="2025-12-16T03:18:58.746067205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:18:58.746315 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 03:18:58.746544 containerd[1622]: time="2025-12-16T03:18:58.746308062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:58.746760 kubelet[2859]: E1216 03:18:58.746724 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:58.746866 kubelet[2859]: E1216 03:18:58.746852 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:58.747257 containerd[1622]: time="2025-12-16T03:18:58.747226943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:18:58.747462 kubelet[2859]: E1216 03:18:58.747382 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ssht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fml4k_calico-system(10d0064d-4592-4240-a31b-0b629652a239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:58.748706 kubelet[2859]: E1216 03:18:58.748650 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:18:58.749000 audit[5255]: USER_START pid=5255 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 
03:18:58.751000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.915822 sshd[5260]: Connection closed by 10.0.0.1 port 60874 Dec 16 03:18:58.916101 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Dec 16 03:18:58.918000 audit[5255]: USER_END pid=5255 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.918000 audit[5255]: CRED_DISP pid=5255 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:18:58.923603 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:60874.service: Deactivated successfully. Dec 16 03:18:58.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:60874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:18:58.926327 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:18:58.927631 systemd-logind[1600]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:18:58.929212 systemd-logind[1600]: Removed session 16. 
Dec 16 03:18:59.146148 containerd[1622]: time="2025-12-16T03:18:59.145893134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:59.156219 containerd[1622]: time="2025-12-16T03:18:59.156130704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:18:59.156347 containerd[1622]: time="2025-12-16T03:18:59.156155982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:59.156540 kubelet[2859]: E1216 03:18:59.156476 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:59.156540 kubelet[2859]: E1216 03:18:59.156536 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:59.156770 kubelet[2859]: E1216 03:18:59.156665 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:59.157986 kubelet[2859]: E1216 03:18:59.157886 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:19:01.284996 kubelet[2859]: E1216 03:19:01.284825 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:19:01.285804 containerd[1622]: time="2025-12-16T03:19:01.285747456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:19:01.627757 containerd[1622]: time="2025-12-16T03:19:01.627579614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:01.636413 containerd[1622]: time="2025-12-16T03:19:01.636324008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:19:01.636609 containerd[1622]: time="2025-12-16T03:19:01.636477248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" 
Dec 16 03:19:01.636880 kubelet[2859]: E1216 03:19:01.636817 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:19:01.636880 kubelet[2859]: E1216 03:19:01.636884 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:19:01.637877 kubelet[2859]: E1216 03:19:01.637781 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhhvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-689f849d56-bpgq2_calico-apiserver(8653bc32-7498-477b-85bb-6a749d44ed95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:01.639012 kubelet[2859]: E1216 03:19:01.638985 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:19:03.291079 containerd[1622]: time="2025-12-16T03:19:03.288731894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:19:03.782973 containerd[1622]: time="2025-12-16T03:19:03.782706076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:19:03.793695 containerd[1622]: time="2025-12-16T03:19:03.793389195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:19:03.793695 containerd[1622]: time="2025-12-16T03:19:03.793530181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:03.793898 kubelet[2859]: E1216 03:19:03.793834 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:19:03.794470 kubelet[2859]: E1216 03:19:03.793899 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:19:03.794886 kubelet[2859]: E1216 03:19:03.794816 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:19:03.800070 containerd[1622]: time="2025-12-16T03:19:03.800014219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:19:03.949066 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:19:03.949156 kernel: audit: type=1130 audit(1765855143.941:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:36814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:03.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:36814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:03.942736 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:36814.service - OpenSSH per-connection server daemon (10.0.0.1:36814). Dec 16 03:19:04.072000 audit[5275]: USER_ACCT pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.074112 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 36814 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:04.082473 sshd-session[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:04.074000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.098044 kernel: audit: type=1101 audit(1765855144.072:817): pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.099119 kernel: audit: type=1103 audit(1765855144.074:818): pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.099162 kernel: audit: type=1006 audit(1765855144.074:819): pid=5275 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 03:19:04.106349 systemd-logind[1600]: New session 17 of user core. Dec 16 03:19:04.111369 kernel: audit: type=1300 audit(1765855144.074:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42b6b430 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:04.074000 audit[5275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42b6b430 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:04.119153 kernel: audit: type=1327 audit(1765855144.074:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:04.074000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:04.136762 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 03:19:04.144000 audit[5275]: USER_START pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.154000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.167052 kernel: audit: type=1105 audit(1765855144.144:820): pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.167154 kernel: audit: type=1103 audit(1765855144.154:821): pid=5279 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.170340 containerd[1622]: time="2025-12-16T03:19:04.169920505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:04.386595 containerd[1622]: time="2025-12-16T03:19:04.386325583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:19:04.386595 containerd[1622]: time="2025-12-16T03:19:04.386464236Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:04.387676 kubelet[2859]: E1216 03:19:04.387357 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:19:04.387676 kubelet[2859]: E1216 03:19:04.387414 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:19:04.387805 kubelet[2859]: E1216 03:19:04.387755 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:04.389085 kubelet[2859]: E1216 03:19:04.389025 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:19:04.416600 sshd[5279]: Connection closed by 10.0.0.1 port 36814 Dec 16 03:19:04.419000 audit[5275]: USER_END pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.419000 audit[5275]: CRED_DISP pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.420794 sshd-session[5275]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:04.436156 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:36814.service: Deactivated successfully. Dec 16 03:19:04.439348 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 16 03:19:04.447461 kernel: audit: type=1106 audit(1765855144.419:822): pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.447854 kernel: audit: type=1104 audit(1765855144.419:823): pid=5275 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:04.447081 systemd-logind[1600]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:19:04.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:36814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:04.449303 systemd-logind[1600]: Removed session 17. 
Dec 16 03:19:08.300029 kubelet[2859]: E1216 03:19:08.299339 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:19:08.305609 kubelet[2859]: E1216 03:19:08.302255 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:19:08.677334 kernel: hrtimer: interrupt took 7497044 ns Dec 16 03:19:09.291466 kubelet[2859]: E1216 03:19:09.291033 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:19:09.449498 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:36962.service - OpenSSH per-connection server daemon (10.0.0.1:36962). Dec 16 03:19:09.461003 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:09.461090 kernel: audit: type=1130 audit(1765855149.455:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:36962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:09.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:36962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:09.528000 audit[5292]: USER_ACCT pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.535827 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:09.538392 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:09.556439 kernel: audit: type=1101 audit(1765855149.528:826): pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.535000 audit[5292]: CRED_ACQ pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.567291 systemd-logind[1600]: New session 18 of user core. 
Dec 16 03:19:09.638941 kernel: audit: type=1103 audit(1765855149.535:827): pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.639109 kernel: audit: type=1006 audit(1765855149.535:828): pid=5292 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 03:19:09.639134 kernel: audit: type=1300 audit(1765855149.535:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd54c78580 a2=3 a3=0 items=0 ppid=1 pid=5292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:09.535000 audit[5292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd54c78580 a2=3 a3=0 items=0 ppid=1 pid=5292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:09.659812 kernel: audit: type=1327 audit(1765855149.535:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:09.535000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:09.665180 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 03:19:09.671000 audit[5292]: USER_START pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.677000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.691189 kernel: audit: type=1105 audit(1765855149.671:829): pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.691286 kernel: audit: type=1103 audit(1765855149.677:830): pid=5296 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.884473 sshd[5296]: Connection closed by 10.0.0.1 port 36962 Dec 16 03:19:09.883035 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:09.884000 audit[5292]: USER_END pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.899623 systemd-logind[1600]: Session 18 logged out. Waiting for processes to exit. 
Dec 16 03:19:09.906322 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:36962.service: Deactivated successfully. Dec 16 03:19:09.889000 audit[5292]: CRED_DISP pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.913409 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:19:09.918008 systemd-logind[1600]: Removed session 18. Dec 16 03:19:09.922380 kernel: audit: type=1106 audit(1765855149.884:831): pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.922485 kernel: audit: type=1104 audit(1765855149.889:832): pid=5292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:09.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:36962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:12.288452 kubelet[2859]: E1216 03:19:12.288001 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:19:12.290218 kubelet[2859]: E1216 03:19:12.289241 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:19:13.296722 kubelet[2859]: E1216 03:19:13.296473 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:19:14.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:46562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:14.905325 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:46562.service - OpenSSH per-connection server daemon (10.0.0.1:46562). Dec 16 03:19:14.908267 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:14.908333 kernel: audit: type=1130 audit(1765855154.904:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:46562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:15.100000 audit[5336]: USER_ACCT pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.105691 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 46562 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:15.114365 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:15.100000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.129401 kernel: audit: type=1101 audit(1765855155.100:835): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.129585 kernel: audit: type=1103 audit(1765855155.100:836): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.129622 kernel: audit: type=1006 audit(1765855155.100:837): pid=5336 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:19:15.141437 kernel: audit: type=1300 audit(1765855155.100:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4e119bb0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:15.100000 audit[5336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4e119bb0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:15.137648 systemd-logind[1600]: New session 19 of user core. Dec 16 03:19:15.100000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:15.146453 kernel: audit: type=1327 audit(1765855155.100:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:15.150377 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 03:19:15.158000 audit[5336]: USER_START pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.171145 kernel: audit: type=1105 audit(1765855155.158:838): pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.171000 audit[5340]: CRED_ACQ pid=5340 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.185262 kernel: audit: type=1103 audit(1765855155.171:839): pid=5340 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.288342 kubelet[2859]: E1216 03:19:15.288276 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:19:15.378865 sshd[5340]: Connection closed by 10.0.0.1 port 46562 Dec 16 
03:19:15.379327 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:15.381000 audit[5336]: USER_END pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.393573 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:46562.service: Deactivated successfully. Dec 16 03:19:15.398853 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:19:15.401594 systemd-logind[1600]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:19:15.406677 kernel: audit: type=1106 audit(1765855155.381:840): pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.406830 kernel: audit: type=1104 audit(1765855155.381:841): pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.381000 audit[5336]: CRED_DISP pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:15.403804 systemd-logind[1600]: Removed session 19. Dec 16 03:19:15.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:46562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 03:19:18.310553 kubelet[2859]: E1216 03:19:18.310486 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:19:20.288394 kubelet[2859]: E1216 03:19:20.288285 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:19:20.419828 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:20.420031 kernel: audit: type=1130 audit(1765855160.415:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:43256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:20.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:43256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:20.416487 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:43256.service - OpenSSH per-connection server daemon (10.0.0.1:43256). Dec 16 03:19:20.569947 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 43256 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:20.568000 audit[5357]: USER_ACCT pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.575638 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:20.572000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.585797 kernel: audit: type=1101 audit(1765855160.568:844): pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.585921 kernel: audit: type=1103 audit(1765855160.572:845): pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.573000 audit[5357]: SYSCALL arch=c000003e syscall=1 success=yes 
exit=3 a0=8 a1=7ffd57be9230 a2=3 a3=0 items=0 ppid=1 pid=5357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:20.598831 kernel: audit: type=1006 audit(1765855160.573:846): pid=5357 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 03:19:20.598928 kernel: audit: type=1300 audit(1765855160.573:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd57be9230 a2=3 a3=0 items=0 ppid=1 pid=5357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:20.598986 kernel: audit: type=1327 audit(1765855160.573:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:20.573000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:20.598696 systemd-logind[1600]: New session 20 of user core. Dec 16 03:19:20.609302 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 03:19:20.614000 audit[5357]: USER_START pid=5357 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.614000 audit[5361]: CRED_ACQ pid=5361 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.642917 kernel: audit: type=1105 audit(1765855160.614:847): pid=5357 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.643074 kernel: audit: type=1103 audit(1765855160.614:848): pid=5361 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.758384 sshd[5361]: Connection closed by 10.0.0.1 port 43256 Dec 16 03:19:20.758744 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:20.762000 audit[5357]: USER_END pid=5357 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:20.769042 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:43256.service: Deactivated successfully. 
Dec 16 03:19:20.773201 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 03:19:20.762000 audit[5357]: CRED_DISP pid=5357 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:20.782006 kernel: audit: type=1106 audit(1765855160.762:849): pid=5357 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:20.782116 kernel: audit: type=1104 audit(1765855160.762:850): pid=5357 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:20.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:43256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:20.776291 systemd-logind[1600]: Session 20 logged out. Waiting for processes to exit.
Dec 16 03:19:20.784120 systemd-logind[1600]: Removed session 20.
Dec 16 03:19:22.295981 kubelet[2859]: E1216 03:19:22.295096 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751"
Dec 16 03:19:23.293781 kubelet[2859]: E1216 03:19:23.291268 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068"
Dec 16 03:19:23.294253 kubelet[2859]: E1216 03:19:23.294208 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239"
Dec 16 03:19:25.820587 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:43396.service - OpenSSH per-connection server daemon (10.0.0.1:43396).
Dec 16 03:19:25.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:43396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:25.842226 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:19:25.842370 kernel: audit: type=1130 audit(1765855165.826:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:43396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:25.977000 audit[5375]: USER_ACCT pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:25.986521 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 43396 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc
Dec 16 03:19:25.987860 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:19:26.000643 kernel: audit: type=1101 audit(1765855165.977:853): pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.000903 kernel: audit: type=1103 audit(1765855165.985:854): pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:25.985000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.009873 kernel: audit: type=1006 audit(1765855165.985:855): pid=5375 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Dec 16 03:19:25.985000 audit[5375]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb423a9f0 a2=3 a3=0 items=0 ppid=1 pid=5375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:19:25.985000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:19:26.019291 kernel: audit: type=1300 audit(1765855165.985:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb423a9f0 a2=3 a3=0 items=0 ppid=1 pid=5375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:19:26.019488 kernel: audit: type=1327 audit(1765855165.985:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:19:26.028599 systemd-logind[1600]: New session 21 of user core.
Dec 16 03:19:26.039265 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 03:19:26.049000 audit[5375]: USER_START pid=5375 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.053000 audit[5379]: CRED_ACQ pid=5379 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.088322 kernel: audit: type=1105 audit(1765855166.049:856): pid=5375 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.088391 kernel: audit: type=1103 audit(1765855166.053:857): pid=5379 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.291307 kubelet[2859]: E1216 03:19:26.291208 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38"
Dec 16 03:19:26.532442 sshd[5379]: Connection closed by 10.0.0.1 port 43396
Dec 16 03:19:26.532826 sshd-session[5375]: pam_unix(sshd:session): session closed for user core
Dec 16 03:19:26.535000 audit[5375]: USER_END pid=5375 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.547380 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:43396.service: Deactivated successfully.
Dec 16 03:19:26.535000 audit[5375]: CRED_DISP pid=5375 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.560306 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 03:19:26.564776 kernel: audit: type=1106 audit(1765855166.535:858): pid=5375 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.564994 kernel: audit: type=1104 audit(1765855166.535:859): pid=5375 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:26.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:43396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:26.568141 systemd-logind[1600]: Session 21 logged out. Waiting for processes to exit.
Dec 16 03:19:26.572927 systemd-logind[1600]: Removed session 21.
Dec 16 03:19:28.289347 kubelet[2859]: E1216 03:19:28.289091 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95"
Dec 16 03:19:29.302736 kubelet[2859]: E1216 03:19:29.293051 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:19:31.556428 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:39016.service - OpenSSH per-connection server daemon (10.0.0.1:39016).
Dec 16 03:19:31.565802 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:19:31.565905 kernel: audit: type=1130 audit(1765855171.555:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:31.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:31.697815 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 39016 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc
Dec 16 03:19:31.696000 audit[5392]: USER_ACCT pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.703888 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:19:31.704982 kernel: audit: type=1101 audit(1765855171.696:862): pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.700000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.721258 kernel: audit: type=1103 audit(1765855171.700:863): pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.721427 kernel: audit: type=1006 audit(1765855171.700:864): pid=5392 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 16 03:19:31.721520 kernel: audit: type=1300 audit(1765855171.700:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffba33e5d0 a2=3 a3=0 items=0 ppid=1 pid=5392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:19:31.700000 audit[5392]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffba33e5d0 a2=3 a3=0 items=0 ppid=1 pid=5392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:19:31.733102 kernel: audit: type=1327 audit(1765855171.700:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:19:31.700000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:19:31.730278 systemd-logind[1600]: New session 22 of user core.
Dec 16 03:19:31.749124 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 03:19:31.755000 audit[5392]: USER_START pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.766000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.778509 kernel: audit: type=1105 audit(1765855171.755:865): pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.778581 kernel: audit: type=1103 audit(1765855171.766:866): pid=5396 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.931404 sshd[5396]: Connection closed by 10.0.0.1 port 39016
Dec 16 03:19:31.930661 sshd-session[5392]: pam_unix(sshd:session): session closed for user core
Dec 16 03:19:31.931000 audit[5392]: USER_END pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.946313 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:39016.service: Deactivated successfully.
Dec 16 03:19:31.952902 kernel: audit: type=1106 audit(1765855171.931:867): pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.938000 audit[5392]: CRED_DISP pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.955463 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 03:19:31.957784 systemd-logind[1600]: Session 22 logged out. Waiting for processes to exit.
Dec 16 03:19:31.967409 kernel: audit: type=1104 audit(1765855171.938:868): pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:31.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:31.973702 systemd-logind[1600]: Removed session 22.
Dec 16 03:19:33.299335 kubelet[2859]: E1216 03:19:33.296587 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea"
Dec 16 03:19:33.299335 kubelet[2859]: E1216 03:19:33.297797 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788"
Dec 16 03:19:34.298585 kubelet[2859]: E1216 03:19:34.298042 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239"
Dec 16 03:19:35.290382 containerd[1622]: time="2025-12-16T03:19:35.289740568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 03:19:35.294663 kubelet[2859]: E1216 03:19:35.294430 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068"
Dec 16 03:19:35.817589 containerd[1622]: time="2025-12-16T03:19:35.817429826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 03:19:35.823587 containerd[1622]: time="2025-12-16T03:19:35.822863977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 03:19:35.823587 containerd[1622]: time="2025-12-16T03:19:35.822989984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 03:19:35.823805 kubelet[2859]: E1216 03:19:35.823402 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 03:19:35.823805 kubelet[2859]: E1216 03:19:35.823467 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 03:19:35.826204 kubelet[2859]: E1216 03:19:35.825834 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dtnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-hv8xn_calico-apiserver(9063305a-9611-4527-b78f-50cf768d4751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 03:19:35.827280 kubelet[2859]: E1216 03:19:35.827151 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751"
Dec 16 03:19:36.959794 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:39170.service - OpenSSH per-connection server daemon (10.0.0.1:39170).
Dec 16 03:19:36.966889 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:19:36.967013 kernel: audit: type=1130 audit(1765855176.961:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:39170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:36.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:39170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:37.101000 audit[5417]: USER_ACCT pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.103974 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 39170 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc
Dec 16 03:19:37.110364 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:19:37.118636 kernel: audit: type=1101 audit(1765855177.101:871): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.120045 kernel: audit: type=1103 audit(1765855177.106:872): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.106000 audit[5417]: CRED_ACQ pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.132869 kernel: audit: type=1006 audit(1765855177.108:873): pid=5417 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 16 03:19:37.133021 kernel: audit: type=1300 audit(1765855177.108:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39fb8520 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:19:37.108000 audit[5417]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39fb8520 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:19:37.143361 kernel: audit: type=1327 audit(1765855177.108:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:19:37.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:19:37.148934 systemd-logind[1600]: New session 23 of user core.
Dec 16 03:19:37.165679 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 03:19:37.186000 audit[5417]: USER_START pid=5417 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.208150 kernel: audit: type=1105 audit(1765855177.186:874): pid=5417 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.207000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.219235 kernel: audit: type=1103 audit(1765855177.207:875): pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.399085 sshd[5421]: Connection closed by 10.0.0.1 port 39170
Dec 16 03:19:37.401840 sshd-session[5417]: pam_unix(sshd:session): session closed for user core
Dec 16 03:19:37.406000 audit[5417]: USER_END pid=5417 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.406000 audit[5417]: CRED_DISP pid=5417 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.417234 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:39170.service: Deactivated successfully.
Dec 16 03:19:37.429094 kernel: audit: type=1106 audit(1765855177.406:876): pid=5417 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.429425 kernel: audit: type=1104 audit(1765855177.406:877): pid=5417 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:19:37.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:39170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:37.423859 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 03:19:37.429038 systemd-logind[1600]: Session 23 logged out. Waiting for processes to exit.
Dec 16 03:19:37.432245 systemd-logind[1600]: Removed session 23.
Dec 16 03:19:38.288549 kubelet[2859]: E1216 03:19:38.285749 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:19:38.294605 containerd[1622]: time="2025-12-16T03:19:38.293146343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 03:19:38.690025 containerd[1622]: time="2025-12-16T03:19:38.689237232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 03:19:38.744630 containerd[1622]: time="2025-12-16T03:19:38.740894723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 03:19:38.744630 containerd[1622]: time="2025-12-16T03:19:38.741076386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 16 03:19:38.748938 kubelet[2859]: E1216 03:19:38.741903 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 03:19:38.748938 kubelet[2859]: E1216 03:19:38.744175 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 03:19:38.755438 kubelet[2859]: E1216 03:19:38.749631 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f11ab101807549ab927f64e4835ff8ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 03:19:38.755438 kubelet[2859]: E1216 03:19:38.754418 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38"
Dec 16 03:19:40.289919 kubelet[2859]: E1216 03:19:40.286884 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95"
Dec 16 03:19:42.436506 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:19:42.436617 kernel: audit: type=1130 audit(1765855182.428:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:44014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:19:42.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:44014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 16 03:19:42.429458 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:44014.service - OpenSSH per-connection server daemon (10.0.0.1:44014). Dec 16 03:19:42.563882 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 44014 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:42.562000 audit[5438]: USER_ACCT pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.573918 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:42.570000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.603382 kernel: audit: type=1101 audit(1765855182.562:880): pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.603515 kernel: audit: type=1103 audit(1765855182.570:881): pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.603558 kernel: audit: type=1006 audit(1765855182.570:882): pid=5438 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 03:19:42.603823 systemd-logind[1600]: New session 24 of user core. 
Dec 16 03:19:42.570000 audit[5438]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd995ce8c0 a2=3 a3=0 items=0 ppid=1 pid=5438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:42.636726 kernel: audit: type=1300 audit(1765855182.570:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd995ce8c0 a2=3 a3=0 items=0 ppid=1 pid=5438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:42.636889 kernel: audit: type=1327 audit(1765855182.570:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:42.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:42.647358 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 03:19:42.653000 audit[5438]: USER_START pid=5438 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.660000 audit[5442]: CRED_ACQ pid=5442 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.672505 kernel: audit: type=1105 audit(1765855182.653:883): pid=5438 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.672636 kernel: audit: type=1103 audit(1765855182.660:884): pid=5442 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.901501 sshd[5442]: Connection closed by 10.0.0.1 port 44014 Dec 16 03:19:42.903935 sshd-session[5438]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:42.909000 audit[5438]: USER_END pid=5438 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.932910 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:44014.service: Deactivated successfully. 
Dec 16 03:19:42.909000 audit[5438]: CRED_DISP pid=5438 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.951593 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 03:19:42.982696 kernel: audit: type=1106 audit(1765855182.909:885): pid=5438 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.982817 kernel: audit: type=1104 audit(1765855182.909:886): pid=5438 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:42.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:44014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:42.982005 systemd-logind[1600]: Session 24 logged out. Waiting for processes to exit. Dec 16 03:19:42.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.59:22-10.0.0.1:44024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:42.998184 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:44024.service - OpenSSH per-connection server daemon (10.0.0.1:44024). Dec 16 03:19:43.012397 systemd-logind[1600]: Removed session 24. 
Dec 16 03:19:43.162893 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 44024 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:43.161000 audit[5456]: USER_ACCT pid=5456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.173000 audit[5456]: CRED_ACQ pid=5456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.173000 audit[5456]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1ed43fc0 a2=3 a3=0 items=0 ppid=1 pid=5456 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:43.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:43.175834 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:43.186291 systemd-logind[1600]: New session 25 of user core. Dec 16 03:19:43.214185 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 03:19:43.219000 audit[5456]: USER_START pid=5456 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.222000 audit[5460]: CRED_ACQ pid=5460 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.720486 sshd[5460]: Connection closed by 10.0.0.1 port 44024 Dec 16 03:19:43.721229 sshd-session[5456]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:43.721000 audit[5456]: USER_END pid=5456 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.722000 audit[5456]: CRED_DISP pid=5456 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.732319 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:44024.service: Deactivated successfully. Dec 16 03:19:43.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.59:22-10.0.0.1:44024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:43.734868 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 03:19:43.735982 systemd-logind[1600]: Session 25 logged out. Waiting for processes to exit. 
Dec 16 03:19:43.740647 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:44030.service - OpenSSH per-connection server daemon (10.0.0.1:44030). Dec 16 03:19:43.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.59:22-10.0.0.1:44030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:43.742007 systemd-logind[1600]: Removed session 25. Dec 16 03:19:43.803000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.804296 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 44030 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:43.804000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.804000 audit[5472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdfdd2cc0 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:43.804000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:43.807148 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:43.812079 systemd-logind[1600]: New session 26 of user core. Dec 16 03:19:43.824185 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 03:19:43.826000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:43.828000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.370000 audit[5515]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:44.370000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe4461a510 a2=0 a3=7ffe4461a4fc items=0 ppid=3025 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:44.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:44.376000 audit[5515]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:44.376000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe4461a510 a2=0 a3=0 items=0 ppid=3025 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:44.376000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:44.398210 sshd[5476]: Connection closed by 10.0.0.1 port 44030 Dec 16 03:19:44.400584 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:44.402000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.402000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.414711 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:44030.service: Deactivated successfully. Dec 16 03:19:44.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.59:22-10.0.0.1:44030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:44.422469 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 03:19:44.428355 systemd-logind[1600]: Session 26 logged out. Waiting for processes to exit. Dec 16 03:19:44.439455 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:44038.service - OpenSSH per-connection server daemon (10.0.0.1:44038). Dec 16 03:19:44.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.59:22-10.0.0.1:44038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:44.442962 systemd-logind[1600]: Removed session 26. 
Dec 16 03:19:44.445000 audit[5522]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:44.445000 audit[5522]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd212f7380 a2=0 a3=7ffd212f736c items=0 ppid=3025 pid=5522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:44.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:44.452000 audit[5522]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:44.452000 audit[5522]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd212f7380 a2=0 a3=0 items=0 ppid=3025 pid=5522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:44.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:44.503000 audit[5521]: USER_ACCT pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.504465 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 44038 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:44.504000 audit[5521]: CRED_ACQ pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.504000 audit[5521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb20b8540 a2=3 a3=0 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:44.504000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:44.506769 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:44.512194 systemd-logind[1600]: New session 27 of user core. Dec 16 03:19:44.516102 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 03:19:44.517000 audit[5521]: USER_START pid=5521 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.520000 audit[5526]: CRED_ACQ pid=5526 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.724713 sshd[5526]: Connection closed by 10.0.0.1 port 44038 Dec 16 03:19:44.726106 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:44.729000 audit[5521]: USER_END pid=5521 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 
03:19:44.729000 audit[5521]: CRED_DISP pid=5521 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.739518 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:44038.service: Deactivated successfully. Dec 16 03:19:44.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.59:22-10.0.0.1:44038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:44.741825 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 03:19:44.742909 systemd-logind[1600]: Session 27 logged out. Waiting for processes to exit. Dec 16 03:19:44.746377 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:44042.service - OpenSSH per-connection server daemon (10.0.0.1:44042). Dec 16 03:19:44.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.59:22-10.0.0.1:44042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:44.747616 systemd-logind[1600]: Removed session 27. 
Dec 16 03:19:44.806000 audit[5538]: USER_ACCT pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.807214 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 44042 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:44.807000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.807000 audit[5538]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7e0d7fa0 a2=3 a3=0 items=0 ppid=1 pid=5538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:44.807000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:44.809565 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:44.815812 systemd-logind[1600]: New session 28 of user core. Dec 16 03:19:44.825219 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 16 03:19:44.826000 audit[5538]: USER_START pid=5538 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.829000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.914064 sshd[5542]: Connection closed by 10.0.0.1 port 44042 Dec 16 03:19:44.914426 sshd-session[5538]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:44.914000 audit[5538]: USER_END pid=5538 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.915000 audit[5538]: CRED_DISP pid=5538 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:44.919331 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:44042.service: Deactivated successfully. Dec 16 03:19:44.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.59:22-10.0.0.1:44042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:44.921571 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 03:19:44.922544 systemd-logind[1600]: Session 28 logged out. Waiting for processes to exit. 
Dec 16 03:19:44.923922 systemd-logind[1600]: Removed session 28. Dec 16 03:19:46.285994 kubelet[2859]: E1216 03:19:46.285902 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:19:46.286929 containerd[1622]: time="2025-12-16T03:19:46.286883235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:19:46.664320 containerd[1622]: time="2025-12-16T03:19:46.664157823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:46.686720 containerd[1622]: time="2025-12-16T03:19:46.686602463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:19:46.686877 containerd[1622]: time="2025-12-16T03:19:46.686784397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:46.687172 kubelet[2859]: E1216 03:19:46.687090 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:19:46.687172 kubelet[2859]: E1216 03:19:46.687162 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:19:46.687506 kubelet[2859]: E1216 03:19:46.687322 
2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lg9dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d5cb44968-bmmvj_calico-system(04744c4e-2de2-49b5-8dbb-8f920ac6f068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:46.688643 kubelet[2859]: E1216 03:19:46.688584 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:19:47.285870 kubelet[2859]: E1216 03:19:47.285584 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:19:47.286858 kubelet[2859]: E1216 03:19:47.286504 2859 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:19:48.286527 containerd[1622]: time="2025-12-16T03:19:48.286470609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:19:48.632157 containerd[1622]: time="2025-12-16T03:19:48.632004123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:48.713383 containerd[1622]: time="2025-12-16T03:19:48.713268108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:48.713383 containerd[1622]: time="2025-12-16T03:19:48.713365602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:19:48.713658 kubelet[2859]: E1216 03:19:48.713604 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:19:48.714105 kubelet[2859]: E1216 03:19:48.713659 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:19:48.714105 kubelet[2859]: E1216 03:19:48.713897 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:48.714235 containerd[1622]: time="2025-12-16T03:19:48.714205966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:19:49.331332 containerd[1622]: time="2025-12-16T03:19:49.331267790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:49.495620 containerd[1622]: time="2025-12-16T03:19:49.495541539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:19:49.495775 containerd[1622]: time="2025-12-16T03:19:49.495579771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:49.495903 kubelet[2859]: E1216 03:19:49.495839 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:19:49.496013 kubelet[2859]: E1216 03:19:49.495906 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:19:49.496182 kubelet[2859]: E1216 03:19:49.496128 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txz4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f875f69d-gkvpq_calico-apiserver(b257371e-206f-4ea1-9c38-a798a97242ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:49.496523 containerd[1622]: time="2025-12-16T03:19:49.496496138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:19:49.497672 kubelet[2859]: E1216 03:19:49.497630 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:19:49.917675 containerd[1622]: time="2025-12-16T03:19:49.917592035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Dec 16 03:19:49.926186 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:44224.service - OpenSSH per-connection server daemon (10.0.0.1:44224). Dec 16 03:19:49.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.59:22-10.0.0.1:44224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:49.932411 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 03:19:49.932523 kernel: audit: type=1130 audit(1765855189.925:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.59:22-10.0.0.1:44224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:49.981000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:49.982881 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 44224 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:49.985309 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:49.990031 systemd-logind[1600]: New session 29 of user core. 
Dec 16 03:19:49.983000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.025621 kernel: audit: type=1101 audit(1765855189.981:929): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.025688 kernel: audit: type=1103 audit(1765855189.983:930): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.025719 kernel: audit: type=1006 audit(1765855189.983:931): pid=5563 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Dec 16 03:19:49.983000 audit[5563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc98b19210 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:50.035016 kernel: audit: type=1300 audit(1765855189.983:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc98b19210 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:50.035128 kernel: audit: type=1327 audit(1765855189.983:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:49.983000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:50.044351 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 16 03:19:50.046000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.049000 audit[5567]: CRED_ACQ pid=5567 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.060310 kernel: audit: type=1105 audit(1765855190.046:932): pid=5563 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.060373 kernel: audit: type=1103 audit(1765855190.049:933): pid=5567 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.080100 containerd[1622]: time="2025-12-16T03:19:50.080030428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:50.080303 containerd[1622]: time="2025-12-16T03:19:50.080126189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:19:50.080543 kubelet[2859]: E1216 03:19:50.080483 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:19:50.081123 kubelet[2859]: E1216 03:19:50.080551 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:19:50.081123 kubelet[2859]: E1216 03:19:50.080944 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8wc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p7kjv_calico-system(296bf1a5-e8cc-4c0a-a28b-66e55ce1e788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:50.081280 containerd[1622]: time="2025-12-16T03:19:50.080901820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:19:50.082995 kubelet[2859]: E1216 03:19:50.082743 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:19:50.149912 sshd[5567]: Connection closed by 10.0.0.1 port 44224 Dec 16 03:19:50.150311 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:50.150000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.159275 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:44224.service: Deactivated successfully. Dec 16 03:19:50.161910 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 03:19:50.163752 systemd-logind[1600]: Session 29 logged out. Waiting for processes to exit. Dec 16 03:19:50.165317 systemd-logind[1600]: Removed session 29. 
Dec 16 03:19:50.152000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.190065 kernel: audit: type=1106 audit(1765855190.150:934): pid=5563 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.190136 kernel: audit: type=1104 audit(1765855190.152:935): pid=5563 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:50.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.59:22-10.0.0.1:44224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:50.549680 containerd[1622]: time="2025-12-16T03:19:50.549621131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:50.624459 containerd[1622]: time="2025-12-16T03:19:50.624381867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:19:50.624653 containerd[1622]: time="2025-12-16T03:19:50.624415902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:50.624693 kubelet[2859]: E1216 03:19:50.624651 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:19:50.624744 kubelet[2859]: E1216 03:19:50.624708 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:19:50.625496 kubelet[2859]: E1216 03:19:50.624868 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ssht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fml4k_calico-system(10d0064d-4592-4240-a31b-0b629652a239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:50.626036 kubelet[2859]: E1216 03:19:50.626008 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:19:51.290588 containerd[1622]: time="2025-12-16T03:19:51.290526623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:19:51.716097 containerd[1622]: time="2025-12-16T03:19:51.715746685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:51.779430 containerd[1622]: 
time="2025-12-16T03:19:51.779306374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:19:51.779430 containerd[1622]: time="2025-12-16T03:19:51.779361318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:51.779688 kubelet[2859]: E1216 03:19:51.779590 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:19:51.779688 kubelet[2859]: E1216 03:19:51.779653 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:19:51.780315 kubelet[2859]: E1216 03:19:51.780252 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx6vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d79fbfdc-v5h7c_calico-system(ff2dd774-9ae6-475f-a701-99b3146ecb38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:51.782329 kubelet[2859]: E1216 03:19:51.782224 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:19:52.285753 kubelet[2859]: E1216 03:19:52.285692 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:19:53.286699 containerd[1622]: time="2025-12-16T03:19:53.286629039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:19:53.720856 containerd[1622]: time="2025-12-16T03:19:53.720715487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:53.765308 containerd[1622]: time="2025-12-16T03:19:53.765236738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:53.765465 containerd[1622]: time="2025-12-16T03:19:53.765295699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:19:53.765644 kubelet[2859]: E1216 03:19:53.765584 2859 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:19:53.766025 kubelet[2859]: E1216 03:19:53.765656 2859 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:19:53.766025 kubelet[2859]: E1216 03:19:53.765888 2859 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhhvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-689f849d56-bpgq2_calico-apiserver(8653bc32-7498-477b-85bb-6a749d44ed95): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:53.767108 kubelet[2859]: E1216 03:19:53.767079 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95" Dec 16 03:19:55.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.59:22-10.0.0.1:59356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:55.169811 systemd[1]: Started sshd@28-10.0.0.59:22-10.0.0.1:59356.service - OpenSSH per-connection server daemon (10.0.0.1:59356). Dec 16 03:19:55.189985 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:55.190128 kernel: audit: type=1130 audit(1765855195.168:937): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.59:22-10.0.0.1:59356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:55.254000 audit[5594]: USER_ACCT pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.256359 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 59356 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:19:55.259734 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:55.264946 systemd-logind[1600]: New session 30 of user core. Dec 16 03:19:55.255000 audit[5594]: CRED_ACQ pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.285724 kernel: audit: type=1101 audit(1765855195.254:938): pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.285845 kernel: audit: type=1103 audit(1765855195.255:939): pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.285873 kernel: audit: type=1006 audit(1765855195.255:940): pid=5594 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Dec 16 03:19:55.255000 audit[5594]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedc6fdb10 a2=3 a3=0 items=0 ppid=1 pid=5594 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:55.289389 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 16 03:19:55.255000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:55.297391 kernel: audit: type=1300 audit(1765855195.255:940): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedc6fdb10 a2=3 a3=0 items=0 ppid=1 pid=5594 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:55.297457 kernel: audit: type=1327 audit(1765855195.255:940): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:55.297485 kernel: audit: type=1105 audit(1765855195.295:941): pid=5594 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.295000 audit[5594]: USER_START pid=5594 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.296000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.353064 kernel: audit: type=1103 audit(1765855195.296:942): pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.468386 sshd[5598]: Connection closed by 10.0.0.1 port 59356 Dec 16 03:19:55.469196 sshd-session[5594]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:55.470000 audit[5594]: USER_END pid=5594 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.475907 systemd-logind[1600]: Session 30 logged out. Waiting for processes to exit. Dec 16 03:19:55.476395 systemd[1]: sshd@28-10.0.0.59:22-10.0.0.1:59356.service: Deactivated successfully. Dec 16 03:19:55.478353 systemd[1]: session-30.scope: Deactivated successfully. Dec 16 03:19:55.480437 systemd-logind[1600]: Removed session 30. 
Dec 16 03:19:55.470000 audit[5594]: CRED_DISP pid=5594 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.526827 kernel: audit: type=1106 audit(1765855195.470:943): pid=5594 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.527080 kernel: audit: type=1104 audit(1765855195.470:944): pid=5594 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:19:55.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.59:22-10.0.0.1:59356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:58.925000 audit[5611]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:58.925000 audit[5611]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd367b0780 a2=0 a3=7ffd367b076c items=0 ppid=3025 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:58.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:58.932000 audit[5611]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:58.932000 audit[5611]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd367b0780 a2=0 a3=7ffd367b076c items=0 ppid=3025 pid=5611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:58.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:59.286548 kubelet[2859]: E1216 03:19:59.286437 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d5cb44968-bmmvj" 
podUID="04744c4e-2de2-49b5-8dbb-8f920ac6f068" Dec 16 03:20:00.483092 systemd[1]: Started sshd@29-10.0.0.59:22-10.0.0.1:55346.service - OpenSSH per-connection server daemon (10.0.0.1:55346). Dec 16 03:20:00.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.59:22-10.0.0.1:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:20:00.484433 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:20:00.484536 kernel: audit: type=1130 audit(1765855200.482:948): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.59:22-10.0.0.1:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:20:00.569000 audit[5613]: USER_ACCT pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.572061 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 55346 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:20:00.573669 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:20:00.581386 kernel: audit: type=1101 audit(1765855200.569:949): pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.581549 kernel: audit: type=1103 audit(1765855200.571:950): pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.571000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.581774 systemd-logind[1600]: New session 31 of user core. Dec 16 03:20:00.584652 kernel: audit: type=1006 audit(1765855200.571:951): pid=5613 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Dec 16 03:20:00.584706 kernel: audit: type=1300 audit(1765855200.571:951): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd859cbe10 a2=3 a3=0 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:00.571000 audit[5613]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd859cbe10 a2=3 a3=0 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:00.571000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:20:00.592288 kernel: audit: type=1327 audit(1765855200.571:951): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:20:00.594275 systemd[1]: Started session-31.scope - Session 31 of User core. 
Dec 16 03:20:00.606272 kernel: audit: type=1105 audit(1765855200.598:952): pid=5613 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.598000 audit[5613]: USER_START pid=5613 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.613975 kernel: audit: type=1103 audit(1765855200.600:953): pid=5618 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.600000 audit[5618]: CRED_ACQ pid=5618 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.712092 sshd[5618]: Connection closed by 10.0.0.1 port 55346 Dec 16 03:20:00.712382 sshd-session[5613]: pam_unix(sshd:session): session closed for user core Dec 16 03:20:00.712000 audit[5613]: USER_END pid=5613 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.717695 systemd[1]: sshd@29-10.0.0.59:22-10.0.0.1:55346.service: Deactivated successfully. 
Dec 16 03:20:00.720096 systemd[1]: session-31.scope: Deactivated successfully. Dec 16 03:20:00.721362 systemd-logind[1600]: Session 31 logged out. Waiting for processes to exit. Dec 16 03:20:00.723006 systemd-logind[1600]: Removed session 31. Dec 16 03:20:00.712000 audit[5613]: CRED_DISP pid=5613 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.750050 kernel: audit: type=1106 audit(1765855200.712:954): pid=5613 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.750142 kernel: audit: type=1104 audit(1765855200.712:955): pid=5613 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:00.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.59:22-10.0.0.1:55346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:20:01.287284 kubelet[2859]: E1216 03:20:01.286857 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-hv8xn" podUID="9063305a-9611-4527-b78f-50cf768d4751" Dec 16 03:20:02.285651 kubelet[2859]: E1216 03:20:02.285594 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fml4k" podUID="10d0064d-4592-4240-a31b-0b629652a239" Dec 16 03:20:03.287017 kubelet[2859]: E1216 03:20:03.286678 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p7kjv" podUID="296bf1a5-e8cc-4c0a-a28b-66e55ce1e788" Dec 16 03:20:04.287081 kubelet[2859]: E1216 03:20:04.287011 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f875f69d-gkvpq" podUID="b257371e-206f-4ea1-9c38-a798a97242ea" Dec 16 03:20:05.287896 kubelet[2859]: E1216 03:20:05.287834 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d79fbfdc-v5h7c" podUID="ff2dd774-9ae6-475f-a701-99b3146ecb38" Dec 16 03:20:05.730680 systemd[1]: Started sshd@30-10.0.0.59:22-10.0.0.1:55464.service - OpenSSH per-connection server daemon (10.0.0.1:55464). 
Dec 16 03:20:05.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.59:22-10.0.0.1:55464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:20:05.744461 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:20:05.744596 kernel: audit: type=1130 audit(1765855205.729:957): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.59:22-10.0.0.1:55464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:20:05.805471 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 55464 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc Dec 16 03:20:05.804000 audit[5633]: USER_ACCT pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.808216 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:20:05.813793 systemd-logind[1600]: New session 32 of user core. 
Dec 16 03:20:05.805000 audit[5633]: CRED_ACQ pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.847930 kernel: audit: type=1101 audit(1765855205.804:958): pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.848090 kernel: audit: type=1103 audit(1765855205.805:959): pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.848119 kernel: audit: type=1006 audit(1765855205.805:960): pid=5633 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Dec 16 03:20:05.805000 audit[5633]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeced938a0 a2=3 a3=0 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:05.857274 kernel: audit: type=1300 audit(1765855205.805:960): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeced938a0 a2=3 a3=0 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:05.857337 kernel: audit: type=1327 audit(1765855205.805:960): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:20:05.805000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:20:05.865293 systemd[1]: Started session-32.scope - Session 32 of User core. Dec 16 03:20:05.868000 audit[5633]: USER_START pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.870000 audit[5638]: CRED_ACQ pid=5638 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.882338 kernel: audit: type=1105 audit(1765855205.868:961): pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.882406 kernel: audit: type=1103 audit(1765855205.870:962): pid=5638 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.960978 sshd[5638]: Connection closed by 10.0.0.1 port 55464 Dec 16 03:20:05.961799 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Dec 16 03:20:05.962000 audit[5633]: USER_END pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 16 03:20:05.968965 systemd[1]: sshd@30-10.0.0.59:22-10.0.0.1:55464.service: Deactivated successfully. Dec 16 03:20:05.971387 systemd[1]: session-32.scope: Deactivated successfully. Dec 16 03:20:05.971999 kernel: audit: type=1106 audit(1765855205.962:963): pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.972219 kernel: audit: type=1104 audit(1765855205.963:964): pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.963000 audit[5633]: CRED_DISP pid=5633 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:20:05.972383 systemd-logind[1600]: Session 32 logged out. Waiting for processes to exit. Dec 16 03:20:05.974103 systemd-logind[1600]: Removed session 32. Dec 16 03:20:05.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.59:22-10.0.0.1:55464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 16 03:20:08.286301 kubelet[2859]: E1216 03:20:08.286214 2859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-689f849d56-bpgq2" podUID="8653bc32-7498-477b-85bb-6a749d44ed95"
Dec 16 03:20:10.981709 systemd[1]: Started sshd@31-10.0.0.59:22-10.0.0.1:50838.service - OpenSSH per-connection server daemon (10.0.0.1:50838).
Dec 16 03:20:10.986014 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:20:10.986170 kernel: audit: type=1130 audit(1765855210.980:966): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.59:22-10.0.0.1:50838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:20:10.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.59:22-10.0.0.1:50838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:20:11.054000 audit[5655]: USER_ACCT pid=5655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.060858 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 50838 ssh2: RSA SHA256:v4ZY8oULw92NY49gbL/MJtnYQ4Sh3yIGVxqYs98TYTc
Dec 16 03:20:11.062244 kernel: audit: type=1101 audit(1765855211.054:967): pid=5655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.062000 audit[5655]: CRED_ACQ pid=5655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.065006 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:20:11.069092 kernel: audit: type=1103 audit(1765855211.062:968): pid=5655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.062000 audit[5655]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad36d2b0 a2=3 a3=0 items=0 ppid=1 pid=5655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:20:11.074061 systemd-logind[1600]: New session 33 of user core.
Dec 16 03:20:11.079627 kernel: audit: type=1006 audit(1765855211.062:969): pid=5655 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1
Dec 16 03:20:11.079764 kernel: audit: type=1300 audit(1765855211.062:969): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad36d2b0 a2=3 a3=0 items=0 ppid=1 pid=5655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:20:11.062000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:20:11.082030 kernel: audit: type=1327 audit(1765855211.062:969): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:20:11.084381 systemd[1]: Started session-33.scope - Session 33 of User core.
Dec 16 03:20:11.087000 audit[5655]: USER_START pid=5655 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.091000 audit[5659]: CRED_ACQ pid=5659 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.101093 kernel: audit: type=1105 audit(1765855211.087:970): pid=5655 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.101230 kernel: audit: type=1103 audit(1765855211.091:971): pid=5659 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.177202 sshd[5659]: Connection closed by 10.0.0.1 port 50838
Dec 16 03:20:11.177988 sshd-session[5655]: pam_unix(sshd:session): session closed for user core
Dec 16 03:20:11.178000 audit[5655]: USER_END pid=5655 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.183803 systemd[1]: sshd@31-10.0.0.59:22-10.0.0.1:50838.service: Deactivated successfully.
Dec 16 03:20:11.178000 audit[5655]: CRED_DISP pid=5655 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.186328 systemd[1]: session-33.scope: Deactivated successfully.
Dec 16 03:20:11.187201 systemd-logind[1600]: Session 33 logged out. Waiting for processes to exit.
Dec 16 03:20:11.189014 systemd-logind[1600]: Removed session 33.
Dec 16 03:20:11.267958 kernel: audit: type=1106 audit(1765855211.178:972): pid=5655 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.268039 kernel: audit: type=1104 audit(1765855211.178:973): pid=5655 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:20:11.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.59:22-10.0.0.1:50838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'