Dec 13 00:25:14.227248 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 20:55:10 -00 2025
Dec 13 00:25:14.227324 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e
Dec 13 00:25:14.227344 kernel: BIOS-provided physical RAM map:
Dec 13 00:25:14.227353 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 00:25:14.227362 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 00:25:14.227371 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 00:25:14.227382 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 00:25:14.227390 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 00:25:14.227399 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 00:25:14.227406 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 00:25:14.227418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 00:25:14.227427 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 00:25:14.227436 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 00:25:14.227444 kernel: NX (Execute Disable) protection: active
Dec 13 00:25:14.227454 kernel: APIC: Static calls initialized
Dec 13 00:25:14.227477 kernel: SMBIOS 2.8 present.
Dec 13 00:25:14.227489 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 00:25:14.227499 kernel: DMI: Memory slots populated: 1/1
Dec 13 00:25:14.227506 kernel: Hypervisor detected: KVM
Dec 13 00:25:14.227514 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 00:25:14.227523 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 00:25:14.227531 kernel: kvm-clock: using sched offset of 3428961517 cycles
Dec 13 00:25:14.227539 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 00:25:14.227548 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 00:25:14.227556 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 00:25:14.227570 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 00:25:14.227580 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 00:25:14.227590 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 00:25:14.227603 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 00:25:14.227615 kernel: Using GB pages for direct mapping
Dec 13 00:25:14.227629 kernel: ACPI: Early table checksum verification disabled
Dec 13 00:25:14.227640 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 00:25:14.227657 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227668 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227678 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227689 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 00:25:14.227701 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227709 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227717 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227729 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 00:25:14.227746 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Dec 13 00:25:14.227756 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Dec 13 00:25:14.227767 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 00:25:14.227777 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Dec 13 00:25:14.227807 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Dec 13 00:25:14.227817 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Dec 13 00:25:14.227828 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Dec 13 00:25:14.227838 kernel: No NUMA configuration found
Dec 13 00:25:14.227849 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 00:25:14.227859 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Dec 13 00:25:14.227876 kernel: Zone ranges:
Dec 13 00:25:14.227887 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 00:25:14.227906 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 00:25:14.227917 kernel: Normal empty
Dec 13 00:25:14.227927 kernel: Device empty
Dec 13 00:25:14.227938 kernel: Movable zone start for each node
Dec 13 00:25:14.227948 kernel: Early memory node ranges
Dec 13 00:25:14.227961 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 00:25:14.227975 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 00:25:14.227987 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 00:25:14.227998 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 00:25:14.228008 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 00:25:14.228021 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 00:25:14.228029 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 00:25:14.228038 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 00:25:14.228053 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 00:25:14.228062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 00:25:14.228070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 00:25:14.228079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 00:25:14.228087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 00:25:14.228096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 00:25:14.228104 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 00:25:14.228113 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 00:25:14.228126 kernel: TSC deadline timer available
Dec 13 00:25:14.228135 kernel: CPU topo: Max. logical packages: 1
Dec 13 00:25:14.228143 kernel: CPU topo: Max. logical dies: 1
Dec 13 00:25:14.228151 kernel: CPU topo: Max. dies per package: 1
Dec 13 00:25:14.228159 kernel: CPU topo: Max. threads per core: 1
Dec 13 00:25:14.228168 kernel: CPU topo: Num. cores per package: 4
Dec 13 00:25:14.228176 kernel: CPU topo: Num. threads per package: 4
Dec 13 00:25:14.228189 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 13 00:25:14.228197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 00:25:14.228205 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 00:25:14.228214 kernel: kvm-guest: setup PV sched yield
Dec 13 00:25:14.228222 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 00:25:14.228231 kernel: Booting paravirtualized kernel on KVM
Dec 13 00:25:14.228240 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 00:25:14.228248 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 00:25:14.228262 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 13 00:25:14.228270 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 13 00:25:14.228278 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 00:25:14.228287 kernel: kvm-guest: PV spinlocks enabled
Dec 13 00:25:14.228295 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 00:25:14.228305 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e
Dec 13 00:25:14.228314 kernel: random: crng init done
Dec 13 00:25:14.228327 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 00:25:14.228336 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 00:25:14.228344 kernel: Fallback order for Node 0: 0
Dec 13 00:25:14.228352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Dec 13 00:25:14.228361 kernel: Policy zone: DMA32
Dec 13 00:25:14.228369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 00:25:14.228378 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 00:25:14.228391 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 13 00:25:14.228400 kernel: ftrace: allocated 157 pages with 5 groups
Dec 13 00:25:14.228409 kernel: Dynamic Preempt: voluntary
Dec 13 00:25:14.228417 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 00:25:14.228427 kernel: rcu: RCU event tracing is enabled.
Dec 13 00:25:14.228435 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 00:25:14.228444 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 00:25:14.228457 kernel: Rude variant of Tasks RCU enabled.
Dec 13 00:25:14.228466 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 00:25:14.228474 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 00:25:14.228482 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 00:25:14.228491 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 00:25:14.228500 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 00:25:14.228508 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 00:25:14.228517 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 00:25:14.228531 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 00:25:14.228554 kernel: Console: colour VGA+ 80x25
Dec 13 00:25:14.228567 kernel: printk: legacy console [ttyS0] enabled
Dec 13 00:25:14.228576 kernel: ACPI: Core revision 20240827
Dec 13 00:25:14.228585 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 00:25:14.228594 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 00:25:14.228602 kernel: x2apic enabled
Dec 13 00:25:14.228611 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 00:25:14.228621 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 00:25:14.228634 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 00:25:14.228643 kernel: kvm-guest: setup PV IPIs
Dec 13 00:25:14.228652 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 00:25:14.228663 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 13 00:25:14.228680 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 00:25:14.228691 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 00:25:14.228702 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 00:25:14.228713 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 00:25:14.228724 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 00:25:14.228735 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 00:25:14.228746 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 13 00:25:14.228763 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 00:25:14.228774 kernel: active return thunk: retbleed_return_thunk
Dec 13 00:25:14.228800 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 00:25:14.228811 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 00:25:14.228820 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 00:25:14.228838 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 00:25:14.228856 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 00:25:14.228872 kernel: active return thunk: srso_return_thunk
Dec 13 00:25:14.228881 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 00:25:14.228897 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 00:25:14.228906 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 00:25:14.228915 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 00:25:14.228924 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 00:25:14.228933 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 00:25:14.228947 kernel: Freeing SMP alternatives memory: 32K
Dec 13 00:25:14.228956 kernel: pid_max: default: 32768 minimum: 301
Dec 13 00:25:14.228965 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 13 00:25:14.228974 kernel: landlock: Up and running.
Dec 13 00:25:14.228986 kernel: SELinux: Initializing.
Dec 13 00:25:14.228998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 00:25:14.229008 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 00:25:14.229022 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 00:25:14.229031 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 00:25:14.229040 kernel: ... version: 0
Dec 13 00:25:14.229048 kernel: ... bit width: 48
Dec 13 00:25:14.229057 kernel: ... generic registers: 6
Dec 13 00:25:14.229066 kernel: ... value mask: 0000ffffffffffff
Dec 13 00:25:14.229075 kernel: ... max period: 00007fffffffffff
Dec 13 00:25:14.229088 kernel: ... fixed-purpose events: 0
Dec 13 00:25:14.229096 kernel: ... event mask: 000000000000003f
Dec 13 00:25:14.229105 kernel: signal: max sigframe size: 1776
Dec 13 00:25:14.229114 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 00:25:14.229123 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 00:25:14.229132 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 13 00:25:14.229141 kernel: smp: Bringing up secondary CPUs ...
Dec 13 00:25:14.229154 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 00:25:14.229163 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 00:25:14.229172 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 00:25:14.229180 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 00:25:14.229189 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15596K init, 2444K bss, 120520K reserved, 0K cma-reserved)
Dec 13 00:25:14.229198 kernel: devtmpfs: initialized
Dec 13 00:25:14.229207 kernel: x86/mm: Memory block size: 128MB
Dec 13 00:25:14.229257 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 00:25:14.229267 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 00:25:14.229276 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 00:25:14.229284 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 00:25:14.229293 kernel: audit: initializing netlink subsys (disabled)
Dec 13 00:25:14.229302 kernel: audit: type=2000 audit(1765585511.191:1): state=initialized audit_enabled=0 res=1
Dec 13 00:25:14.229310 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 00:25:14.229325 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 00:25:14.229333 kernel: cpuidle: using governor menu
Dec 13 00:25:14.229342 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 00:25:14.229351 kernel: dca service started, version 1.12.1
Dec 13 00:25:14.229360 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 13 00:25:14.229368 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 00:25:14.229377 kernel: PCI: Using configuration type 1 for base access
Dec 13 00:25:14.229391 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 00:25:14.229399 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 00:25:14.229408 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 00:25:14.229417 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 00:25:14.229426 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 00:25:14.229435 kernel: ACPI: Added _OSI(Module Device)
Dec 13 00:25:14.229443 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 00:25:14.229456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 00:25:14.229465 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 00:25:14.229474 kernel: ACPI: Interpreter enabled
Dec 13 00:25:14.229483 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 00:25:14.229491 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 00:25:14.229500 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 00:25:14.229509 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 00:25:14.229518 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 00:25:14.229531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 00:25:14.229838 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 00:25:14.230060 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 00:25:14.230233 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 00:25:14.230245 kernel: PCI host bridge to bus 0000:00
Dec 13 00:25:14.230424 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 00:25:14.230579 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 00:25:14.230746 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 00:25:14.230937 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 00:25:14.231099 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 00:25:14.231313 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 00:25:14.231478 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 00:25:14.231669 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 13 00:25:14.231861 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 13 00:25:14.232060 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 13 00:25:14.232229 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 13 00:25:14.232402 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 13 00:25:14.232635 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 00:25:14.232829 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 00:25:14.233019 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 13 00:25:14.233221 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 13 00:25:14.233400 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 00:25:14.233589 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 13 00:25:14.233760 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Dec 13 00:25:14.233955 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 13 00:25:14.234135 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 00:25:14.234323 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 13 00:25:14.234499 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Dec 13 00:25:14.234667 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Dec 13 00:25:14.234861 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 00:25:14.235046 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 13 00:25:14.235236 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 13 00:25:14.235405 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 00:25:14.235586 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 13 00:25:14.235754 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Dec 13 00:25:14.236010 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Dec 13 00:25:14.236188 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 13 00:25:14.236353 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 13 00:25:14.236365 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 00:25:14.236378 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 00:25:14.236387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 00:25:14.236396 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 00:25:14.236404 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 00:25:14.236413 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 00:25:14.236422 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 00:25:14.236430 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 00:25:14.236445 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 00:25:14.236453 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 00:25:14.236462 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 00:25:14.236470 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 00:25:14.236479 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 00:25:14.236488 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 00:25:14.236496 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 00:25:14.236509 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 00:25:14.236518 kernel: iommu: Default domain type: Translated
Dec 13 00:25:14.236527 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 00:25:14.236535 kernel: PCI: Using ACPI for IRQ routing
Dec 13 00:25:14.236544 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 00:25:14.236553 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 00:25:14.236561 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 00:25:14.236737 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 00:25:14.236927 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 00:25:14.237093 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 00:25:14.237109 kernel: vgaarb: loaded
Dec 13 00:25:14.237120 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 00:25:14.237129 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 00:25:14.237138 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 00:25:14.237155 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 00:25:14.237163 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 00:25:14.237172 kernel: pnp: PnP ACPI init
Dec 13 00:25:14.237353 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 00:25:14.237366 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 00:25:14.237375 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 00:25:14.237390 kernel: NET: Registered PF_INET protocol family
Dec 13 00:25:14.237399 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 00:25:14.237408 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 00:25:14.237417 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 00:25:14.237426 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 00:25:14.237435 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 00:25:14.237443 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 00:25:14.237456 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 00:25:14.237465 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 00:25:14.237474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 00:25:14.237483 kernel: NET: Registered PF_XDP protocol family
Dec 13 00:25:14.237636 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 00:25:14.237818 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 00:25:14.238011 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 00:25:14.238183 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 00:25:14.238338 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 00:25:14.238490 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 00:25:14.238501 kernel: PCI: CLS 0 bytes, default 64
Dec 13 00:25:14.238511 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 13 00:25:14.238519 kernel: Initialise system trusted keyrings
Dec 13 00:25:14.238528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 00:25:14.238544 kernel: Key type asymmetric registered
Dec 13 00:25:14.238553 kernel: Asymmetric key parser 'x509' registered
Dec 13 00:25:14.238562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 00:25:14.238571 kernel: io scheduler mq-deadline registered
Dec 13 00:25:14.238580 kernel: io scheduler kyber registered
Dec 13 00:25:14.238589 kernel: io scheduler bfq registered
Dec 13 00:25:14.238597 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 00:25:14.238611 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 00:25:14.238620 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 00:25:14.238629 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 00:25:14.238638 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 00:25:14.238646 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 00:25:14.238655 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 00:25:14.238664 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 00:25:14.238678 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 00:25:14.238898 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 00:25:14.238911 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 00:25:14.239095 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 00:25:14.239442 kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T00:25:12 UTC (1765585512)
Dec 13 00:25:14.240367 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 00:25:14.240388 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 00:25:14.240397 kernel: NET: Registered PF_INET6 protocol family
Dec 13 00:25:14.240406 kernel: Segment Routing with IPv6
Dec 13 00:25:14.240415 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 00:25:14.240423 kernel: NET: Registered PF_PACKET protocol family
Dec 13 00:25:14.240432 kernel: Key type dns_resolver registered
Dec 13 00:25:14.240441 kernel: IPI shorthand broadcast: enabled
Dec 13 00:25:14.240450 kernel: sched_clock: Marking stable (1730012491, 239153311)->(2042976330, -73810528)
Dec 13 00:25:14.240461 kernel: registered taskstats version 1
Dec 13 00:25:14.240470 kernel: Loading compiled-in X.509 certificates
Dec 13 00:25:14.240479 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 199a9f6885410acbf0a1b178e5562253352ca03c'
Dec 13 00:25:14.240488 kernel: Demotion targets for Node 0: null
Dec 13 00:25:14.240496 kernel: Key type .fscrypt registered
Dec 13 00:25:14.240505 kernel: Key type fscrypt-provisioning registered
Dec 13 00:25:14.240514 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 00:25:14.240527 kernel: ima: Allocated hash algorithm: sha1
Dec 13 00:25:14.240536 kernel: ima: No architecture policies found
Dec 13 00:25:14.240545 kernel: clk: Disabling unused clocks
Dec 13 00:25:14.240554 kernel: Freeing unused kernel image (initmem) memory: 15596K
Dec 13 00:25:14.240563 kernel: Write protecting the kernel read-only data: 47104k
Dec 13 00:25:14.240572 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 13 00:25:14.240581 kernel: Run /init as init process
Dec 13 00:25:14.240594 kernel: with arguments:
Dec 13 00:25:14.240602 kernel: /init
Dec 13 00:25:14.240611 kernel: with environment:
Dec 13 00:25:14.240619 kernel: HOME=/
Dec 13 00:25:14.240628 kernel: TERM=linux
Dec 13 00:25:14.240637 kernel: SCSI subsystem initialized
Dec 13 00:25:14.240646 kernel: libata version 3.00 loaded.
Dec 13 00:25:14.240832 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 00:25:14.240871 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 00:25:14.241122 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 13 00:25:14.241291 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 13 00:25:14.241504 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 00:25:14.241764 kernel: scsi host0: ahci
Dec 13 00:25:14.241972 kernel: scsi host1: ahci
Dec 13 00:25:14.242151 kernel: scsi host2: ahci
Dec 13 00:25:14.242329 kernel: scsi host3: ahci
Dec 13 00:25:14.242587 kernel: scsi host4: ahci
Dec 13 00:25:14.243475 kernel: scsi host5: ahci
Dec 13 00:25:14.243499 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Dec 13 00:25:14.243509 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Dec 13 00:25:14.243518 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Dec 13 00:25:14.243528 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Dec 13 00:25:14.243537 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Dec 13 00:25:14.243546 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Dec 13 00:25:14.243555 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 00:25:14.243570 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 00:25:14.243579 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 00:25:14.243588 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 00:25:14.243599 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 00:25:14.243608 kernel: ata3.00: LPM support broken, forcing max_power
Dec 13 00:25:14.243617 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 00:25:14.243626 kernel: ata3.00: applying bridge limits
Dec 13 00:25:14.243640 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 00:25:14.243649 kernel: ata3.00: LPM support broken, forcing max_power
Dec 13 00:25:14.243658 kernel: ata3.00: configured for UDMA/100
Dec 13 00:25:14.243908 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 00:25:14.244091 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 00:25:14.244256 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 13 00:25:14.244276 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 00:25:14.244286 kernel: GPT:16515071 != 27000831
Dec 13 00:25:14.244295 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 00:25:14.244304 kernel: GPT:16515071 != 27000831
Dec 13 00:25:14.244313 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 00:25:14.244321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 00:25:14.244505 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 00:25:14.244524 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 00:25:14.244705 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 00:25:14.244718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 00:25:14.244727 kernel: device-mapper: uevent: version 1.0.3
Dec 13 00:25:14.244736 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 13 00:25:14.244746 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 13 00:25:14.244758 kernel: raid6: avx2x4 gen() 28936 MB/s
Dec 13 00:25:14.244767 kernel: raid6: avx2x2 gen() 30225 MB/s
Dec 13 00:25:14.244776 kernel: raid6: avx2x1 gen() 24102 MB/s
Dec 13 00:25:14.244799 kernel: raid6: using algorithm avx2x2 gen() 30225 MB/s
Dec 13 00:25:14.244808 kernel: raid6: .... xor() 19656 MB/s, rmw enabled
Dec 13 00:25:14.244820 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 00:25:14.244830 kernel: xor: automatically using best checksumming function avx
Dec 13 00:25:14.244839 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 00:25:14.244848 kernel: BTRFS: device fsid 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (182)
Dec 13 00:25:14.244858 kernel: BTRFS info (device dm-0): first mount of filesystem 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d
Dec 13 00:25:14.244867 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 00:25:14.244876 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 00:25:14.244899 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 13 00:25:14.244908 kernel: loop: module loaded
Dec 13 00:25:14.244916 kernel: loop0: detected capacity change from 0 to 100528
Dec 13 00:25:14.244925 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 00:25:14.244939 systemd[1]: Successfully made /usr/ read-only.
Dec 13 00:25:14.244951 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 13 00:25:14.244966 systemd[1]: Detected virtualization kvm.
Dec 13 00:25:14.244975 systemd[1]: Detected architecture x86-64.
Dec 13 00:25:14.244984 systemd[1]: Running in initrd.
Dec 13 00:25:14.244994 systemd[1]: No hostname configured, using default hostname.
Dec 13 00:25:14.245004 systemd[1]: Hostname set to .
Dec 13 00:25:14.245013 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 13 00:25:14.245022 systemd[1]: Queued start job for default target initrd.target.
Dec 13 00:25:14.245037 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 00:25:14.245046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 00:25:14.245056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 00:25:14.245066 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 00:25:14.245075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 00:25:14.245088 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 00:25:14.245098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 00:25:14.245107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 00:25:14.245117 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 00:25:14.245126 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 13 00:25:14.245136 systemd[1]: Reached target paths.target - Path Units.
Dec 13 00:25:14.245147 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 00:25:14.245161 systemd[1]: Reached target swap.target - Swaps.
Dec 13 00:25:14.245171 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 00:25:14.245180 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 00:25:14.245189 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 00:25:14.245199 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 13 00:25:14.245209 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 00:25:14.245218 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 13 00:25:14.245232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 00:25:14.245242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 00:25:14.245251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 00:25:14.245261 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 00:25:14.245271 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 00:25:14.245281 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 00:25:14.245295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 00:25:14.245304 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 00:25:14.245314 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 13 00:25:14.245324 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 00:25:14.245333 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 00:25:14.245343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 00:25:14.245355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 00:25:14.245365 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 00:25:14.245374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 00:25:14.245384 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 00:25:14.245432 systemd-journald[317]: Collecting audit messages is enabled.
Dec 13 00:25:14.245458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 00:25:14.245468 systemd-journald[317]: Journal started
Dec 13 00:25:14.245493 systemd-journald[317]: Runtime Journal (/run/log/journal/d494a881426e4729b90e1c4b43b4a06e) is 6M, max 48.2M, 42.1M free.
Dec 13 00:25:14.257814 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 00:25:14.257857 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 00:25:14.261308 systemd-modules-load[320]: Inserted module 'br_netfilter'
Dec 13 00:25:14.340977 kernel: Bridge firewalling registered
Dec 13 00:25:14.341017 kernel: audit: type=1130 audit(1765585514.334:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.336629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 00:25:14.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.348809 kernel: audit: type=1130 audit(1765585514.343:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.348882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 00:25:14.356779 kernel: audit: type=1130 audit(1765585514.350:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.356858 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 00:25:14.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.365820 kernel: audit: type=1130 audit(1765585514.360:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.366061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 00:25:14.370162 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 00:25:14.374373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 00:25:14.384399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 00:25:14.396232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 00:25:14.407236 kernel: audit: type=1130 audit(1765585514.396:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.397225 systemd-tmpfiles[339]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 13 00:25:14.412821 kernel: audit: type=1130 audit(1765585514.407:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.405086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 00:25:14.421069 kernel: audit: type=1130 audit(1765585514.412:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.408314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 00:25:14.421268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 00:25:14.430405 kernel: audit: type=1130 audit(1765585514.421:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.428168 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 00:25:14.434814 kernel: audit: type=1334 audit(1765585514.431:10): prog-id=6 op=LOAD
Dec 13 00:25:14.431000 audit: BPF prog-id=6 op=LOAD
Dec 13 00:25:14.434451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 00:25:14.459745 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e
Dec 13 00:25:14.501774 systemd-resolved[357]: Positive Trust Anchors:
Dec 13 00:25:14.501809 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 00:25:14.501813 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 13 00:25:14.501843 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 00:25:14.541646 systemd-resolved[357]: Defaulting to hostname 'linux'.
Dec 13 00:25:14.542937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 00:25:14.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.547288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 00:25:14.613813 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 00:25:14.626806 kernel: iscsi: registered transport (tcp)
Dec 13 00:25:14.653851 kernel: iscsi: registered transport (qla4xxx)
Dec 13 00:25:14.653888 kernel: QLogic iSCSI HBA Driver
Dec 13 00:25:14.681899 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 00:25:14.702201 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 00:25:14.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.703473 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 00:25:14.774372 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 00:25:14.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.776976 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 00:25:14.779159 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 00:25:14.824779 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 00:25:14.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.827000 audit: BPF prog-id=7 op=LOAD
Dec 13 00:25:14.827000 audit: BPF prog-id=8 op=LOAD
Dec 13 00:25:14.828533 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 00:25:14.860085 systemd-udevd[597]: Using default interface naming scheme 'v257'.
Dec 13 00:25:14.875025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 00:25:14.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.880435 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 00:25:14.908464 dracut-pre-trigger[652]: rd.md=0: removing MD RAID activation
Dec 13 00:25:14.923914 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 00:25:14.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.925000 audit: BPF prog-id=9 op=LOAD
Dec 13 00:25:14.927134 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 00:25:14.953200 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 00:25:14.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.957134 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 00:25:14.995305 systemd-networkd[717]: lo: Link UP
Dec 13 00:25:14.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:14.995315 systemd-networkd[717]: lo: Gained carrier
Dec 13 00:25:14.996136 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 00:25:14.999041 systemd[1]: Reached target network.target - Network.
Dec 13 00:25:15.063156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 00:25:15.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:15.069934 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 00:25:15.133066 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 00:25:15.168886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 00:25:15.172907 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 00:25:15.185374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 00:25:15.195812 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 13 00:25:15.203306 kernel: AES CTR mode by8 optimization enabled
Dec 13 00:25:15.202620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 00:25:15.210644 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 00:25:15.213343 systemd-networkd[717]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 00:25:15.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:15.213351 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 00:25:15.213768 systemd-networkd[717]: eth0: Link UP
Dec 13 00:25:15.214835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 00:25:15.215003 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 00:25:15.216132 systemd-networkd[717]: eth0: Gained carrier
Dec 13 00:25:15.216142 systemd-networkd[717]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 00:25:15.219257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 00:25:15.226191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 00:25:15.243370 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 00:25:15.259143 disk-uuid[834]: Primary Header is updated.
Dec 13 00:25:15.259143 disk-uuid[834]: Secondary Entries is updated.
Dec 13 00:25:15.259143 disk-uuid[834]: Secondary Header is updated.
Dec 13 00:25:15.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:15.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:15.261846 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 00:25:15.342224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 00:25:15.379016 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 00:25:15.382994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 00:25:15.386685 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 00:25:15.391356 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 00:25:15.423515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 00:25:15.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:16.309744 disk-uuid[838]: Warning: The kernel is still using the old partition table.
Dec 13 00:25:16.309744 disk-uuid[838]: The new table will be used at the next reboot or after you
Dec 13 00:25:16.309744 disk-uuid[838]: run partprobe(8) or kpartx(8)
Dec 13 00:25:16.309744 disk-uuid[838]: The operation has completed successfully.
Dec 13 00:25:16.323176 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 00:25:16.323314 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 00:25:16.338699 kernel: kauditd_printk_skb: 16 callbacks suppressed
Dec 13 00:25:16.338769 kernel: audit: type=1130 audit(1765585516.327:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:16.338804 kernel: audit: type=1131 audit(1765585516.327:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:16.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:16.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:25:16.328197 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 00:25:16.373841 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Dec 13 00:25:16.373912 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:16.373931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:16.379200 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:25:16.379250 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:25:16.387815 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:16.388776 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 00:25:16.396184 kernel: audit: type=1130 audit(1765585516.388:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.390618 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 00:25:16.519201 ignition[878]: Ignition 2.24.0 Dec 13 00:25:16.519217 ignition[878]: Stage: fetch-offline Dec 13 00:25:16.519278 ignition[878]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:16.519291 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:16.519385 ignition[878]: parsed url from cmdline: "" Dec 13 00:25:16.519389 ignition[878]: no config URL provided Dec 13 00:25:16.519394 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 00:25:16.519405 ignition[878]: no config at "/usr/lib/ignition/user.ign" Dec 13 00:25:16.519452 ignition[878]: op(1): [started] loading QEMU firmware config module Dec 13 00:25:16.519457 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 00:25:16.531987 ignition[878]: op(1): [finished] loading QEMU firmware config module Dec 13 00:25:16.684697 ignition[878]: parsing config with SHA512: 323fbdc8e48df187a8aa70afd00988c4f58820b39fff65bc2ee2289991748e9e5a42bd34431b637afb1ae8dcc8acebd5db4d7b70163e1669b859ca0a02bc7b8b Dec 13 00:25:16.688218 unknown[878]: fetched base config from "system" Dec 13 00:25:16.688231 unknown[878]: fetched user config from "qemu" Dec 13 00:25:16.771692 kernel: audit: type=1130 audit(1765585516.763:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.688578 ignition[878]: fetch-offline: fetch-offline passed Dec 13 00:25:16.691690 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:25:16.688628 ignition[878]: Ignition finished successfully Dec 13 00:25:16.764718 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 00:25:16.765976 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 00:25:16.799803 ignition[888]: Ignition 2.24.0 Dec 13 00:25:16.799829 ignition[888]: Stage: kargs Dec 13 00:25:16.800032 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:16.800047 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:16.801143 ignition[888]: kargs: kargs passed Dec 13 00:25:16.801193 ignition[888]: Ignition finished successfully Dec 13 00:25:16.808224 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 00:25:16.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.813554 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 00:25:16.819167 kernel: audit: type=1130 audit(1765585516.811:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.845289 ignition[895]: Ignition 2.24.0 Dec 13 00:25:16.845304 ignition[895]: Stage: disks Dec 13 00:25:16.845470 ignition[895]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:16.845480 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:16.846397 ignition[895]: disks: disks passed Dec 13 00:25:16.857247 kernel: audit: type=1130 audit(1765585516.850:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.850801 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 00:25:16.846452 ignition[895]: Ignition finished successfully Dec 13 00:25:16.852078 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 00:25:16.859133 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 00:25:16.860302 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:25:16.864245 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:25:16.864833 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:25:16.874665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 00:25:16.920539 systemd-fsck[904]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 13 00:25:16.929019 systemd-networkd[717]: eth0: Gained IPv6LL Dec 13 00:25:16.929841 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 00:25:16.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.939814 kernel: audit: type=1130 audit(1765585516.934:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.943378 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 00:25:17.076819 kernel: EXT4-fs (vda9): mounted filesystem fc518408-2cc6-461e-9cc3-fcafcb4d05ba r/w with ordered data mode. Quota mode: none. 
Dec 13 00:25:17.077177 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 00:25:17.078759 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 00:25:17.083114 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:25:17.085960 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 00:25:17.088463 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 00:25:17.088502 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 00:25:17.088530 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:25:17.110233 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (912) Dec 13 00:25:17.110276 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:17.110303 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:17.110325 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:25:17.110354 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:25:17.097493 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 00:25:17.104359 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 00:25:17.114041 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:25:17.290734 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 00:25:17.299313 kernel: audit: type=1130 audit(1765585517.291:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:17.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:17.293925 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 00:25:17.303371 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 00:25:17.312828 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:17.329684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 00:25:17.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:17.337853 kernel: audit: type=1130 audit(1765585517.331:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:17.338832 ignition[1010]: INFO : Ignition 2.24.0 Dec 13 00:25:17.338832 ignition[1010]: INFO : Stage: mount Dec 13 00:25:17.341529 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:17.341529 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:17.341529 ignition[1010]: INFO : mount: mount passed Dec 13 00:25:17.341529 ignition[1010]: INFO : Ignition finished successfully Dec 13 00:25:17.346515 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Dec 13 00:25:17.353880 kernel: audit: type=1130 audit(1765585517.346:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:17.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:17.355216 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 00:25:17.362156 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 00:25:17.383238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:25:17.419947 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022) Dec 13 00:25:17.420002 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:25:17.420017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:25:17.425128 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:25:17.425150 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:25:17.426840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:25:17.456848 ignition[1039]: INFO : Ignition 2.24.0 Dec 13 00:25:17.456848 ignition[1039]: INFO : Stage: files Dec 13 00:25:17.459535 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:17.459535 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:17.459535 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Dec 13 00:25:17.465500 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 00:25:17.465500 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 00:25:17.472630 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 00:25:17.475093 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 00:25:17.475093 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 00:25:17.473354 unknown[1039]: wrote ssh authorized keys file for user: core Dec 13 00:25:17.481362 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:25:17.481362 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 13 00:25:17.533481 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 00:25:17.599337 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:25:17.599337 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/nginx.yaml" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:25:17.605566 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:25:17.625623 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:25:17.625623 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:25:17.625623 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:17.625623 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:17.625623 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:17.625623 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 13 00:25:18.057693 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 00:25:18.532140 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 13 00:25:18.532140 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 00:25:18.538853 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 00:25:18.577156 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:25:18.586419 ignition[1039]: INFO : files: op(f): op(10): 
[finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:25:18.588955 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 00:25:18.588955 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 00:25:18.588955 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 00:25:18.588955 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:25:18.588955 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:25:18.588955 ignition[1039]: INFO : files: files passed Dec 13 00:25:18.588955 ignition[1039]: INFO : Ignition finished successfully Dec 13 00:25:18.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.592836 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 00:25:18.600766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 00:25:18.607647 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 00:25:18.619958 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 00:25:18.620083 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 00:25:18.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.628401 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 00:25:18.634043 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:25:18.636639 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:25:18.636639 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:25:18.643158 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:25:18.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.644482 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 00:25:18.651680 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 00:25:18.730043 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 00:25:18.730177 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 00:25:18.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:18.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.735512 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 00:25:18.736183 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 00:25:18.737020 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 00:25:18.738020 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 00:25:18.769775 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:25:18.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.772233 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 00:25:18.806569 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:25:18.807033 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:25:18.811195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:25:18.812463 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 00:25:18.817837 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 00:25:18.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.817973 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:25:18.824191 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 00:25:18.825266 systemd[1]: Stopped target basic.target - Basic System. Dec 13 00:25:18.828885 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 00:25:18.832262 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:25:18.835813 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 00:25:18.839721 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:25:18.843734 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 00:25:18.847613 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:25:18.851433 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 00:25:18.855365 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 00:25:18.860898 systemd[1]: Stopped target swap.target - Swaps. Dec 13 00:25:18.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.862258 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 00:25:18.862402 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:25:18.869281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:25:18.873225 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 00:25:18.874550 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 00:25:18.874726 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:25:18.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.881298 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 00:25:18.881422 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 00:25:18.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.888455 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 00:25:18.888580 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:25:18.892620 systemd[1]: Stopped target paths.target - Path Units. Dec 13 00:25:18.896277 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 00:25:18.899906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:25:18.900865 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 00:25:18.906966 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 00:25:18.912683 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 00:25:18.912860 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:25:18.918339 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 00:25:18.918477 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:25:18.919554 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 13 00:25:18.919668 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:25:18.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.933699 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 00:25:18.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.933927 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:25:18.934725 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 00:25:18.934899 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 00:25:18.944090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 00:25:18.952456 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 00:25:18.953308 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 00:25:18.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.953425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:25:18.958515 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 00:25:18.958651 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:25:18.962339 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 00:25:18.962493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:25:18.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.976495 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 00:25:18.976613 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 00:25:18.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.990068 ignition[1096]: INFO : Ignition 2.24.0 Dec 13 00:25:18.990068 ignition[1096]: INFO : Stage: umount Dec 13 00:25:18.992809 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:25:18.992809 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:25:18.992809 ignition[1096]: INFO : umount: umount passed Dec 13 00:25:18.992809 ignition[1096]: INFO : Ignition finished successfully Dec 13 00:25:19.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:18.998900 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 00:25:18.999037 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 00:25:19.001527 systemd[1]: Stopped target network.target - Network. Dec 13 00:25:19.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.002219 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 00:25:19.002275 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 00:25:19.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.002897 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 00:25:19.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.002944 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 00:25:19.010737 systemd[1]: ignition-setup.service: Deactivated successfully. 
Dec 13 00:25:19.010861 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 00:25:19.016552 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 00:25:19.016608 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 00:25:19.017397 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 00:25:19.021646 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 00:25:19.026015 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 00:25:19.040933 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 00:25:19.041133 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 00:25:19.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.047000 audit: BPF prog-id=6 op=UNLOAD Dec 13 00:25:19.047902 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 00:25:19.048034 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 00:25:19.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.055056 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 00:25:19.055736 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 00:25:19.054000 audit: BPF prog-id=9 op=UNLOAD Dec 13 00:25:19.055805 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:25:19.062336 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 00:25:19.065372 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 00:25:19.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.065430 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:25:19.068852 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 00:25:19.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.070470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:25:19.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.074934 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 00:25:19.075010 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 00:25:19.075701 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:25:19.092950 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 00:25:19.093148 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 00:25:19.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:19.096160 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 00:25:19.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.096211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 00:25:19.104564 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 00:25:19.104772 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:25:19.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.105823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 00:25:19.105873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 00:25:19.110482 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 00:25:19.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.110535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:25:19.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.113750 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 00:25:19.113834 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:25:19.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.120127 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 00:25:19.120184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 00:25:19.123495 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 00:25:19.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.123552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:25:19.129303 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 00:25:19.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.131674 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 13 00:25:19.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.131730 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:25:19.132387 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 00:25:19.132437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:25:19.133194 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 00:25:19.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.133237 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:25:19.139155 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 00:25:19.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:19.139204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:25:19.139681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:25:19.139726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:19.162509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 00:25:19.162705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 00:25:19.202224 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 00:25:19.202346 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 00:25:19.207436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 00:25:19.209147 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 00:25:19.224825 systemd[1]: Switching root. Dec 13 00:25:19.257230 systemd-journald[317]: Journal stopped Dec 13 00:25:21.003567 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). Dec 13 00:25:21.003644 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 00:25:21.003721 kernel: SELinux: policy capability open_perms=1 Dec 13 00:25:21.003735 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 00:25:21.003747 kernel: SELinux: policy capability always_check_network=0 Dec 13 00:25:21.003759 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 00:25:21.003776 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 00:25:21.003810 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 00:25:21.003824 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 00:25:21.003845 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 00:25:21.003863 systemd[1]: Successfully loaded SELinux policy in 66.409ms. Dec 13 00:25:21.003886 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.030ms. 
Dec 13 00:25:21.003900 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:25:21.003913 systemd[1]: Detected virtualization kvm. Dec 13 00:25:21.003928 systemd[1]: Detected architecture x86-64. Dec 13 00:25:21.003941 systemd[1]: Detected first boot. Dec 13 00:25:21.003961 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:25:21.003979 zram_generator::config[1139]: No configuration found. Dec 13 00:25:21.003998 kernel: Guest personality initialized and is inactive Dec 13 00:25:21.004010 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 13 00:25:21.004023 kernel: Initialized host personality Dec 13 00:25:21.004035 kernel: NET: Registered PF_VSOCK protocol family Dec 13 00:25:21.004047 systemd[1]: Populated /etc with preset unit settings. Dec 13 00:25:21.004068 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 00:25:21.004081 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 00:25:21.004094 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 00:25:21.004112 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 00:25:21.004126 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 00:25:21.004139 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 00:25:21.004158 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 00:25:21.004178 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 00:25:21.004191 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 00:25:21.004205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 00:25:21.004217 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 00:25:21.004230 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:25:21.004245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:25:21.004258 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 00:25:21.004278 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 00:25:21.004292 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 00:25:21.004305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:25:21.004318 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 00:25:21.004331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:25:21.004344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:25:21.004364 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 00:25:21.004377 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 00:25:21.004392 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Dec 13 00:25:21.004405 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 00:25:21.004418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:25:21.004431 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:25:21.004445 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 00:25:21.004464 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:25:21.004478 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:25:21.004490 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 00:25:21.004503 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 00:25:21.004527 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 00:25:21.004541 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:25:21.004556 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 13 00:25:21.004576 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:25:21.004589 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 00:25:21.004602 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 00:25:21.004615 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:25:21.004628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:25:21.004641 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 00:25:21.004654 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 00:25:21.004667 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 00:25:21.004688 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 00:25:21.004701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:21.004725 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 00:25:21.004743 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 00:25:21.004761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 00:25:21.004776 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 00:25:21.004810 systemd[1]: Reached target machines.target - Containers. Dec 13 00:25:21.004829 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 00:25:21.004845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:21.004863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:25:21.004878 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 00:25:21.004891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:21.004905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:25:21.004927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 13 00:25:21.004940 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 00:25:21.004953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:21.004966 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 00:25:21.004983 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 00:25:21.004996 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 00:25:21.005009 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 00:25:21.005028 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 00:25:21.005042 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:21.005055 kernel: fuse: init (API version 7.41) Dec 13 00:25:21.005074 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:25:21.005086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:25:21.005099 kernel: ACPI: bus type drm_connector registered Dec 13 00:25:21.005111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:25:21.005124 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 00:25:21.005139 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 13 00:25:21.005152 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:25:21.005184 systemd-journald[1220]: Collecting audit messages is enabled. Dec 13 00:25:21.005267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:21.005281 systemd-journald[1220]: Journal started Dec 13 00:25:21.005305 systemd-journald[1220]: Runtime Journal (/run/log/journal/d494a881426e4729b90e1c4b43b4a06e) is 6M, max 48.2M, 42.1M free. Dec 13 00:25:20.755000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 00:25:20.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:20.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:20.944000 audit: BPF prog-id=14 op=UNLOAD Dec 13 00:25:20.944000 audit: BPF prog-id=13 op=UNLOAD Dec 13 00:25:20.949000 audit: BPF prog-id=15 op=LOAD Dec 13 00:25:20.950000 audit: BPF prog-id=16 op=LOAD Dec 13 00:25:20.950000 audit: BPF prog-id=17 op=LOAD Dec 13 00:25:21.001000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 00:25:21.001000 audit[1220]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe39ff8030 a2=4000 a3=0 items=0 ppid=1 pid=1220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:21.001000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 00:25:20.622601 systemd[1]: Queued start job for default target multi-user.target. Dec 13 00:25:20.638758 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 00:25:20.639289 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 00:25:21.012582 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:25:21.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.013885 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 00:25:21.015823 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 00:25:21.017885 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 00:25:21.019754 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 00:25:21.021878 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 00:25:21.023732 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 00:25:21.025739 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 00:25:21.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.028055 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:25:21.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.030607 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 00:25:21.030888 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 00:25:21.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.033158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:21.033414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 00:25:21.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.035672 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:25:21.035969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:25:21.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.038094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:21.038350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:21.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.040862 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 00:25:21.041119 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 00:25:21.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.043362 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:21.043617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:21.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.045993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:25:21.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:21.048409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:25:21.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.052115 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 00:25:21.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.054596 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 00:25:21.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.070523 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:25:21.072761 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 00:25:21.076431 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 00:25:21.079682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 00:25:21.081703 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 00:25:21.081751 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:25:21.084699 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 13 00:25:21.087591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:21.087911 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:21.091317 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 00:25:21.094409 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 00:25:21.096512 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:25:21.098117 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 00:25:21.100033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:25:21.103989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:25:21.107974 systemd-journald[1220]: Time spent on flushing to /var/log/journal/d494a881426e4729b90e1c4b43b4a06e is 35.810ms for 1097 entries. Dec 13 00:25:21.107974 systemd-journald[1220]: System Journal (/var/log/journal/d494a881426e4729b90e1c4b43b4a06e) is 8M, max 163.5M, 155.5M free. Dec 13 00:25:21.174027 systemd-journald[1220]: Received client request to flush runtime journal. 
Dec 13 00:25:21.174298 kernel: loop1: detected capacity change from 0 to 229808 Dec 13 00:25:21.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.108958 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 00:25:21.132145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 00:25:21.138919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:25:21.140018 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 00:25:21.146057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 00:25:21.148676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 00:25:21.156881 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 00:25:21.160545 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 00:25:21.177145 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 00:25:21.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.191246 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Dec 13 00:25:21.191266 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Dec 13 00:25:21.194291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:25:21.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.198372 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:25:21.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.202933 kernel: loop2: detected capacity change from 0 to 171112 Dec 13 00:25:21.205460 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 00:25:21.207137 kernel: loop2: p1 p2 p3 Dec 13 00:25:21.217837 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 00:25:21.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.246278 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Dec 13 00:25:21.254340 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 13 00:25:21.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.258000 audit: BPF prog-id=18 op=LOAD Dec 13 00:25:21.258000 audit: BPF prog-id=19 op=LOAD Dec 13 00:25:21.258000 audit: BPF prog-id=20 op=LOAD Dec 13 00:25:21.260483 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 00:25:21.262000 audit: BPF prog-id=21 op=LOAD Dec 13 00:25:21.265907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:25:21.268696 kernel: loop3: detected capacity change from 0 to 375256 Dec 13 00:25:21.269102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:25:21.270828 kernel: loop3: p1 p2 p3 Dec 13 00:25:21.274000 audit: BPF prog-id=22 op=LOAD Dec 13 00:25:21.274000 audit: BPF prog-id=23 op=LOAD Dec 13 00:25:21.274000 audit: BPF prog-id=24 op=LOAD Dec 13 00:25:21.276460 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 13 00:25:21.284817 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 00:25:21.284000 audit: BPF prog-id=25 op=LOAD Dec 13 00:25:21.284000 audit: BPF prog-id=26 op=LOAD Dec 13 00:25:21.284000 audit: BPF prog-id=27 op=LOAD Dec 13 00:25:21.286314 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 00:25:21.323644 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Dec 13 00:25:21.324056 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Dec 13 00:25:21.329574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:25:21.336938 kernel: kauditd_printk_skb: 107 callbacks suppressed Dec 13 00:25:21.337016 kernel: audit: type=1130 audit(1765585521.330:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.337046 kernel: loop4: detected capacity change from 0 to 229808 Dec 13 00:25:21.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.347806 kernel: loop5: detected capacity change from 0 to 171112 Dec 13 00:25:21.352837 kernel: loop5: p1 p2 p3 Dec 13 00:25:21.392999 systemd-nsresourced[1285]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 00:25:21.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.395270 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 00:25:21.402813 kernel: audit: type=1130 audit(1765585521.396:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:21.405816 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:21.408889 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:25:21.417885 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:25:21.417947 kernel: audit: type=1130 audit(1765585521.412:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.411715 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 00:25:21.419823 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:25:21.421835 (sd-merge)[1290]: device-mapper: reload ioctl on af67e6a29067aeda0590a0009488436dd8f718bac6be743160aad6f147c2927f-verity (253:1) failed: Invalid argument Dec 13 00:25:21.432819 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:21.478473 systemd-oomd[1282]: No swap; memory pressure usage will be degraded Dec 13 00:25:21.479311 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 00:25:21.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.485819 kernel: audit: type=1130 audit(1765585521.480:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.501487 systemd-resolved[1283]: Positive Trust Anchors: Dec 13 00:25:21.501507 systemd-resolved[1283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:25:21.501512 systemd-resolved[1283]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:25:21.501546 systemd-resolved[1283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:25:21.507668 systemd-resolved[1283]: Defaulting to hostname 'linux'. Dec 13 00:25:21.509317 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:25:21.511385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:25:21.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:21.518992 kernel: audit: type=1130 audit(1765585521.510:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.640931 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 00:25:21.919049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 00:25:21.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.923761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:25:21.921000 audit: BPF prog-id=8 op=UNLOAD Dec 13 00:25:21.927293 kernel: audit: type=1130 audit(1765585521.920:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:21.927325 kernel: audit: type=1334 audit(1765585521.921:148): prog-id=8 op=UNLOAD Dec 13 00:25:21.927349 kernel: audit: type=1334 audit(1765585521.921:149): prog-id=7 op=UNLOAD Dec 13 00:25:21.921000 audit: BPF prog-id=7 op=UNLOAD Dec 13 00:25:21.922000 audit: BPF prog-id=28 op=LOAD Dec 13 00:25:21.930879 kernel: audit: type=1334 audit(1765585521.922:150): prog-id=28 op=LOAD Dec 13 00:25:21.930928 kernel: audit: type=1334 audit(1765585521.922:151): prog-id=29 op=LOAD Dec 13 00:25:21.922000 audit: BPF prog-id=29 op=LOAD Dec 13 00:25:21.991820 systemd-udevd[1309]: Using default interface naming scheme 'v257'. Dec 13 00:25:22.013874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:25:22.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:22.016000 audit: BPF prog-id=30 op=LOAD Dec 13 00:25:22.020013 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:25:22.144157 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 00:25:22.164855 systemd-networkd[1314]: lo: Link UP Dec 13 00:25:22.165442 systemd-networkd[1314]: lo: Gained carrier Dec 13 00:25:22.171044 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:25:22.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:22.182019 systemd-networkd[1314]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:22.182137 systemd-networkd[1314]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:25:22.182878 systemd-networkd[1314]: eth0: Link UP Dec 13 00:25:22.183159 systemd-networkd[1314]: eth0: Gained carrier Dec 13 00:25:22.183232 systemd-networkd[1314]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:22.187191 systemd[1]: Reached target network.target - Network. 
Dec 13 00:25:22.194817 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 00:25:22.191012 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 00:25:22.196961 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 00:25:22.199637 systemd-networkd[1314]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:25:22.201840 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 00:25:22.207037 kernel: ACPI: button: Power Button [PWRF] Dec 13 00:25:22.218901 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 00:25:22.220250 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 00:25:22.220562 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 13 00:25:22.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:22.290884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:25:22.295941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 00:25:22.330031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:22.333524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 00:25:22.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:22.512123 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Dec 13 00:25:22.514841 kernel: loop6: detected capacity change from 0 to 375256 Dec 13 00:25:22.518844 kernel: loop6: p1 p2 p3 Dec 13 00:25:22.537399 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:22.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:22.549191 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:22.549299 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:25:22.553810 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:25:22.553853 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:25:22.555889 (sd-merge)[1290]: device-mapper: reload ioctl on c81b0b335c4f741d8803812340292f37f57a6bdf618683fbcdb11178b8725544-verity (253:2) failed: Invalid argument Dec 13 00:25:22.560229 kernel: kvm_amd: TSC scaling supported Dec 13 00:25:22.560289 kernel: kvm_amd: Nested Virtualization enabled Dec 13 00:25:22.560308 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:22.560329 kernel: kvm_amd: Nested Paging enabled Dec 13 00:25:22.561972 kernel: kvm_amd: LBR virtualization supported Dec 13 00:25:22.562006 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 00:25:22.562930 kernel: kvm_amd: Virtual GIF supported Dec 13 00:25:22.591827 kernel: EDAC MC: Ver: 3.0.0 Dec 13 00:25:22.611844 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 00:25:22.613998 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 13 00:25:22.618200 (sd-merge)[1290]: Merged extensions into '/usr'. Dec 13 00:25:22.623713 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 00:25:22.623756 systemd[1]: Reloading... Dec 13 00:25:22.685909 zram_generator::config[1409]: No configuration found. Dec 13 00:25:22.956471 systemd[1]: Reloading finished in 331 ms. Dec 13 00:25:22.984477 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 00:25:22.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.019664 systemd[1]: Starting ensure-sysext.service... Dec 13 00:25:23.022227 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 00:25:23.024000 audit: BPF prog-id=31 op=LOAD Dec 13 00:25:23.024000 audit: BPF prog-id=18 op=UNLOAD Dec 13 00:25:23.024000 audit: BPF prog-id=32 op=LOAD Dec 13 00:25:23.024000 audit: BPF prog-id=33 op=LOAD Dec 13 00:25:23.024000 audit: BPF prog-id=19 op=UNLOAD Dec 13 00:25:23.024000 audit: BPF prog-id=20 op=UNLOAD Dec 13 00:25:23.026000 audit: BPF prog-id=34 op=LOAD Dec 13 00:25:23.026000 audit: BPF prog-id=22 op=UNLOAD Dec 13 00:25:23.026000 audit: BPF prog-id=35 op=LOAD Dec 13 00:25:23.026000 audit: BPF prog-id=36 op=LOAD Dec 13 00:25:23.026000 audit: BPF prog-id=23 op=UNLOAD Dec 13 00:25:23.026000 audit: BPF prog-id=24 op=UNLOAD Dec 13 00:25:23.027000 audit: BPF prog-id=37 op=LOAD Dec 13 00:25:23.027000 audit: BPF prog-id=15 op=UNLOAD Dec 13 00:25:23.027000 audit: BPF prog-id=38 op=LOAD Dec 13 00:25:23.027000 audit: BPF prog-id=39 op=LOAD Dec 13 00:25:23.027000 audit: BPF prog-id=16 op=UNLOAD Dec 13 00:25:23.027000 audit: BPF prog-id=17 op=UNLOAD Dec 13 00:25:23.028000 audit: BPF prog-id=40 op=LOAD Dec 13 00:25:23.028000 audit: BPF prog-id=41 op=LOAD Dec 13 00:25:23.028000 audit: BPF prog-id=28 op=UNLOAD Dec 13 00:25:23.028000 audit: BPF prog-id=29 op=UNLOAD Dec 13 00:25:23.030000 audit: BPF prog-id=42 op=LOAD Dec 13 00:25:23.030000 audit: BPF prog-id=21 op=UNLOAD Dec 13 00:25:23.031000 audit: BPF prog-id=43 op=LOAD Dec 13 00:25:23.031000 audit: BPF prog-id=30 op=UNLOAD Dec 13 00:25:23.032000 audit: BPF prog-id=44 op=LOAD Dec 13 00:25:23.032000 audit: BPF prog-id=25 op=UNLOAD Dec 13 00:25:23.032000 audit: BPF prog-id=45 op=LOAD Dec 13 00:25:23.032000 audit: BPF prog-id=46 op=LOAD Dec 13 00:25:23.032000 audit: BPF prog-id=26 op=UNLOAD Dec 13 00:25:23.032000 audit: BPF prog-id=27 op=UNLOAD Dec 13 00:25:23.042902 systemd[1]: Reload requested from client PID 1445 ('systemctl') (unit ensure-sysext.service)... Dec 13 00:25:23.042924 systemd[1]: Reloading... Dec 13 00:25:23.046467 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 00:25:23.046525 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 00:25:23.046983 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 00:25:23.049036 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Dec 13 00:25:23.049135 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Dec 13 00:25:23.056635 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:25:23.056659 systemd-tmpfiles[1446]: Skipping /boot Dec 13 00:25:23.069897 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:25:23.069912 systemd-tmpfiles[1446]: Skipping /boot Dec 13 00:25:23.113890 zram_generator::config[1480]: No configuration found. Dec 13 00:25:23.360022 systemd[1]: Reloading finished in 316 ms. Dec 13 00:25:23.384146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:25:23.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:23.388000 audit: BPF prog-id=47 op=LOAD Dec 13 00:25:23.388000 audit: BPF prog-id=48 op=LOAD Dec 13 00:25:23.388000 audit: BPF prog-id=40 op=UNLOAD Dec 13 00:25:23.388000 audit: BPF prog-id=41 op=UNLOAD Dec 13 00:25:23.389000 audit: BPF prog-id=49 op=LOAD Dec 13 00:25:23.389000 audit: BPF prog-id=42 op=UNLOAD Dec 13 00:25:23.390000 audit: BPF prog-id=50 op=LOAD Dec 13 00:25:23.390000 audit: BPF prog-id=43 op=UNLOAD Dec 13 00:25:23.392000 audit: BPF prog-id=51 op=LOAD Dec 13 00:25:23.392000 audit: BPF prog-id=37 op=UNLOAD Dec 13 00:25:23.392000 audit: BPF prog-id=52 op=LOAD Dec 13 00:25:23.392000 audit: BPF prog-id=53 op=LOAD Dec 13 00:25:23.392000 audit: BPF prog-id=38 op=UNLOAD Dec 13 00:25:23.392000 audit: BPF prog-id=39 op=UNLOAD Dec 13 00:25:23.416000 audit: BPF prog-id=54 op=LOAD Dec 13 00:25:23.416000 audit: BPF prog-id=31 op=UNLOAD Dec 13 00:25:23.416000 audit: BPF prog-id=55 op=LOAD Dec 13 00:25:23.416000 audit: BPF prog-id=56 op=LOAD Dec 13 00:25:23.416000 audit: BPF prog-id=32 op=UNLOAD Dec 13 00:25:23.416000 audit: BPF prog-id=33 op=UNLOAD Dec 13 00:25:23.418000 audit: BPF prog-id=57 op=LOAD Dec 13 00:25:23.418000 audit: BPF prog-id=34 op=UNLOAD Dec 13 00:25:23.418000 audit: BPF prog-id=58 op=LOAD Dec 13 00:25:23.418000 audit: BPF prog-id=59 op=LOAD Dec 13 00:25:23.418000 audit: BPF prog-id=35 op=UNLOAD Dec 13 00:25:23.418000 audit: BPF prog-id=36 op=UNLOAD Dec 13 00:25:23.419000 audit: BPF prog-id=60 op=LOAD Dec 13 00:25:23.419000 audit: BPF prog-id=44 op=UNLOAD Dec 13 00:25:23.419000 audit: BPF prog-id=61 op=LOAD Dec 13 00:25:23.419000 audit: BPF prog-id=62 op=LOAD Dec 13 00:25:23.419000 audit: BPF prog-id=45 op=UNLOAD Dec 13 00:25:23.419000 audit: BPF prog-id=46 op=UNLOAD Dec 13 00:25:23.430730 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:25:23.433916 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 00:25:23.455431 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 00:25:23.459776 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 00:25:23.463842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 00:25:23.469174 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:23.470048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:23.471487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:23.476904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:23.486291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:23.489981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:23.490213 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:23.490312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 13 00:25:23.490407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:23.496020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:23.496198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:23.496421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:23.496594 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:23.496696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:23.496000 audit[1527]: SYSTEM_BOOT pid=1527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.497568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:23.498815 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 00:25:23.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.501669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:23.501982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:23.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.505525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:23.506122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:23.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.509206 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:23.509451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 00:25:23.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:23.523552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:23.524281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:23.527260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:23.530524 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:25:23.532000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:25:23.532000 audit[1551]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffedc6e7ca0 a2=420 a3=0 items=0 ppid=1518 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:23.532000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:23.534393 augenrules[1551]: No rules Dec 13 00:25:23.534799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:23.545673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:23.548020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:23.548493 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:23.548595 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:23.548727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:23.550548 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:25:23.551930 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:25:23.554893 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 00:25:23.558010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:23.558472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:23.562141 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:25:23.562539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:25:23.564919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:23.565269 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:23.567851 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 00:25:23.568087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:23.576533 systemd[1]: Finished ensure-sysext.service. Dec 13 00:25:23.585917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:25:23.585997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:25:23.588052 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 00:25:23.591581 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 00:25:23.594285 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 00:25:23.680335 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 00:25:24.285588 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 00:25:24.285657 systemd-timesyncd[1565]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 00:25:24.285718 systemd-timesyncd[1565]: Initial clock synchronization to Sat 2025-12-13 00:25:24.285563 UTC. Dec 13 00:25:24.288420 systemd-resolved[1283]: Clock change detected. Flushing caches. Dec 13 00:25:24.666070 systemd-networkd[1314]: eth0: Gained IPv6LL Dec 13 00:25:24.669473 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 00:25:24.671832 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 00:25:24.715126 ldconfig[1520]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 00:25:24.721251 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 00:25:24.724683 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 00:25:24.759920 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 00:25:24.762058 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:25:24.763894 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 00:25:24.765919 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 00:25:24.768013 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 13 00:25:24.770191 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 00:25:24.772047 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 00:25:24.774180 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 13 00:25:24.776484 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 00:25:24.778299 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 00:25:24.780415 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 00:25:24.780457 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:25:24.782008 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:25:24.784985 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Dec 13 00:25:24.788744 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 00:25:24.792787 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 00:25:24.795194 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 00:25:24.797560 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 13 00:25:24.803062 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 00:25:24.805051 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 00:25:24.807882 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 00:25:24.810473 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:25:24.812064 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:25:24.813697 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:25:24.813725 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:25:24.815074 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 00:25:24.817815 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 00:25:24.820559 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 00:25:24.823397 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 00:25:24.826498 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 00:25:24.829703 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 00:25:24.831394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 00:25:24.832735 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 13 00:25:24.836528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:24.840628 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 00:25:24.844559 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 00:25:24.848513 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 00:25:24.855679 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 00:25:24.859422 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 00:25:24.867652 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 00:25:24.869370 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 00:25:24.870016 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 00:25:24.870876 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 00:25:24.875345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 00:25:24.903315 jq[1591]: true Dec 13 00:25:24.909462 jq[1580]: false Dec 13 00:25:24.910937 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 00:25:24.919894 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 13 00:25:24.922454 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 00:25:24.926800 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 00:25:24.927121 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 00:25:24.930275 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache Dec 13 00:25:24.930685 update_engine[1590]: I20251213 00:25:24.930583 1590 main.cc:92] Flatcar Update Engine starting Dec 13 00:25:24.931541 oslogin_cache_refresh[1582]: Refreshing passwd entry cache Dec 13 00:25:24.931969 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 00:25:24.933839 extend-filesystems[1581]: Found /dev/vda6 Dec 13 00:25:24.939704 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 00:25:24.949569 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting Dec 13 00:25:24.949620 oslogin_cache_refresh[1582]: Failure getting users, quitting Dec 13 00:25:24.949686 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:25:24.949736 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:25:24.950457 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache Dec 13 00:25:24.950509 oslogin_cache_refresh[1582]: Refreshing group entry cache Dec 13 00:25:24.957111 extend-filesystems[1581]: Found /dev/vda9 Dec 13 00:25:24.964047 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting Dec 13 00:25:24.964047 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:25:24.964248 jq[1622]: true Dec 13 00:25:24.960012 oslogin_cache_refresh[1582]: Failure getting groups, quitting Dec 13 00:25:24.960449 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 00:25:24.960022 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:25:24.966973 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 00:25:24.967321 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 00:25:24.969892 extend-filesystems[1581]: Checking size of /dev/vda9 Dec 13 00:25:24.973488 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 13 00:25:24.973883 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 13 00:25:24.980969 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 00:25:24.983775 tar[1611]: linux-amd64/LICENSE Dec 13 00:25:24.984269 tar[1611]: linux-amd64/helm Dec 13 00:25:24.995546 dbus-daemon[1578]: [system] SELinux support is enabled Dec 13 00:25:24.996688 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 00:25:25.009937 update_engine[1590]: I20251213 00:25:25.000701 1590 update_check_scheduler.cc:74] Next update check in 9m8s Dec 13 00:25:25.005828 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 00:25:25.005858 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 00:25:25.010676 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 00:25:25.010793 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 00:25:25.013550 systemd[1]: Started update-engine.service - Update Engine. Dec 13 00:25:25.041923 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 00:25:25.046504 systemd-logind[1589]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 00:25:25.046538 systemd-logind[1589]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 00:25:25.049691 systemd-logind[1589]: New seat seat0. Dec 13 00:25:25.052396 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 00:25:25.102858 extend-filesystems[1581]: Resized partition /dev/vda9 Dec 13 00:25:25.152218 extend-filesystems[1666]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 00:25:25.178929 bash[1657]: Updated "/home/core/.ssh/authorized_keys" Dec 13 00:25:25.185400 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 00:25:25.189826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 00:25:25.194069 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 00:25:25.226271 sshd_keygen[1601]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 00:25:25.261479 locksmithd[1650]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 00:25:25.327042 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 00:25:25.330670 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 00:25:25.355106 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 00:25:25.355463 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 00:25:25.359046 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 00:25:25.389449 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 00:25:25.396144 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 00:25:25.401118 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 00:25:25.404232 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 00:25:25.406309 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 00:25:25.450962 extend-filesystems[1666]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 00:25:25.450962 extend-filesystems[1666]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 00:25:25.450962 extend-filesystems[1666]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 13 00:25:25.457974 extend-filesystems[1581]: Resized filesystem in /dev/vda9 Dec 13 00:25:25.460266 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 00:25:25.461769 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 00:25:25.775333 tar[1611]: linux-amd64/README.md Dec 13 00:25:25.803261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 00:25:25.803653 containerd[1623]: time="2025-12-13T00:25:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 00:25:25.804974 containerd[1623]: time="2025-12-13T00:25:25.804847869Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 00:25:25.843898 containerd[1623]: time="2025-12-13T00:25:25.843801681Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.419µs" Dec 13 00:25:25.843898 containerd[1623]: time="2025-12-13T00:25:25.843866252Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 00:25:25.844058 containerd[1623]: time="2025-12-13T00:25:25.843924131Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 00:25:25.844058 containerd[1623]: time="2025-12-13T00:25:25.843937876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 00:25:25.844210 containerd[1623]: time="2025-12-13T00:25:25.844162047Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 00:25:25.844210 containerd[1623]: time="2025-12-13T00:25:25.844182204Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844305 containerd[1623]: time="2025-12-13T00:25:25.844275940Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844305 containerd[1623]: time="2025-12-13T00:25:25.844290057Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844661 containerd[1623]: time="2025-12-13T00:25:25.844619174Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844661 containerd[1623]: time="2025-12-13T00:25:25.844637639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844661 containerd[1623]: time="2025-12-13T00:25:25.844647878Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844837 containerd[1623]: time="2025-12-13T00:25:25.844668296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 00:25:25.844988 containerd[1623]: time="2025-12-13T00:25:25.844949774Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 00:25:25.845096 containerd[1623]: time="2025-12-13T00:25:25.845062395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 00:25:25.845365 containerd[1623]: time="2025-12-13T00:25:25.845327062Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:25:25.845417 containerd[1623]: 
time="2025-12-13T00:25:25.845397974Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:25:25.845417 containerd[1623]: time="2025-12-13T00:25:25.845409706Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 00:25:25.845470 containerd[1623]: time="2025-12-13T00:25:25.845427820Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 00:25:25.845817 containerd[1623]: time="2025-12-13T00:25:25.845781884Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 00:25:25.845880 containerd[1623]: time="2025-12-13T00:25:25.845858127Z" level=info msg="metadata content store policy set" policy=shared Dec 13 00:25:25.855033 containerd[1623]: time="2025-12-13T00:25:25.854975576Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 00:25:25.855152 containerd[1623]: time="2025-12-13T00:25:25.855060375Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:25:25.855196 containerd[1623]: time="2025-12-13T00:25:25.855165622Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:25:25.855196 containerd[1623]: time="2025-12-13T00:25:25.855192893Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 00:25:25.855254 containerd[1623]: time="2025-12-13T00:25:25.855235924Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 00:25:25.855254 containerd[1623]: time="2025-12-13T00:25:25.855249159Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 00:25:25.855293 containerd[1623]: time="2025-12-13T00:25:25.855268936Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 00:25:25.855293 containerd[1623]: time="2025-12-13T00:25:25.855279075Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 00:25:25.855293 containerd[1623]: time="2025-12-13T00:25:25.855290617Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 00:25:25.855346 containerd[1623]: time="2025-12-13T00:25:25.855308250Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 00:25:25.855346 containerd[1623]: time="2025-12-13T00:25:25.855328688Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 00:25:25.855346 containerd[1623]: time="2025-12-13T00:25:25.855339388Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 00:25:25.855416 containerd[1623]: time="2025-12-13T00:25:25.855348626Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 00:25:25.855416 containerd[1623]: time="2025-12-13T00:25:25.855360778Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Dec 13 00:25:25.855551 containerd[1623]: time="2025-12-13T00:25:25.855524185Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 00:25:25.855575 containerd[1623]: time="2025-12-13T00:25:25.855551115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 00:25:25.855575 containerd[1623]: time="2025-12-13T00:25:25.855564050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 00:25:25.855617 containerd[1623]: time="2025-12-13T00:25:25.855576844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 00:25:25.855617 containerd[1623]: time="2025-12-13T00:25:25.855588806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 00:25:25.855617 containerd[1623]: time="2025-12-13T00:25:25.855598003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 00:25:25.855617 containerd[1623]: time="2025-12-13T00:25:25.855608022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 00:25:25.855696 containerd[1623]: time="2025-12-13T00:25:25.855633840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 00:25:25.855696 containerd[1623]: time="2025-12-13T00:25:25.855648027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 00:25:25.855696 containerd[1623]: time="2025-12-13T00:25:25.855659038Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 00:25:25.855696 containerd[1623]: time="2025-12-13T00:25:25.855671050Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 00:25:25.855872 containerd[1623]: time="2025-12-13T00:25:25.855846439Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 00:25:25.856013 containerd[1623]: time="2025-12-13T00:25:25.855995388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 00:25:25.856043 containerd[1623]: time="2025-12-13T00:25:25.856015456Z" level=info msg="Start snapshots syncer" Dec 13 00:25:25.856592 containerd[1623]: time="2025-12-13T00:25:25.856408403Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 00:25:25.856798 containerd[1623]: time="2025-12-13T00:25:25.856750865Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 00:25:25.856964 containerd[1623]: time="2025-12-13T00:25:25.856819063Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 00:25:25.857336 containerd[1623]: time="2025-12-13T00:25:25.857264258Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 00:25:25.857411 containerd[1623]: time="2025-12-13T00:25:25.857392849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 00:25:25.857445 containerd[1623]: time="2025-12-13T00:25:25.857415091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 00:25:25.857445 containerd[1623]: time="2025-12-13T00:25:25.857425931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 00:25:25.857445 containerd[1623]: time="2025-12-13T00:25:25.857435319Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 00:25:25.857500 containerd[1623]: time="2025-12-13T00:25:25.857454334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 00:25:25.857520 containerd[1623]: time="2025-12-13T00:25:25.857511021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 00:25:25.857540 containerd[1623]: time="2025-12-13T00:25:25.857526380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 00:25:25.857565 containerd[1623]: time="2025-12-13T00:25:25.857542730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 
00:25:25.857565 containerd[1623]: time="2025-12-13T00:25:25.857555334Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 00:25:25.857932 containerd[1623]: time="2025-12-13T00:25:25.857905971Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:25:25.857932 containerd[1623]: time="2025-12-13T00:25:25.857924817Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:25:25.857932 containerd[1623]: time="2025-12-13T00:25:25.857933273Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:25:25.857995 containerd[1623]: time="2025-12-13T00:25:25.857943392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:25:25.857995 containerd[1623]: time="2025-12-13T00:25:25.857951727Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 13 00:25:25.857995 containerd[1623]: time="2025-12-13T00:25:25.857962187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 13 00:25:25.857995 containerd[1623]: time="2025-12-13T00:25:25.857976333Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 13 00:25:25.858071 containerd[1623]: time="2025-12-13T00:25:25.858001571Z" level=info msg="runtime interface created" Dec 13 00:25:25.858071 containerd[1623]: time="2025-12-13T00:25:25.858007893Z" level=info msg="created NRI interface" Dec 13 00:25:25.858071 containerd[1623]: time="2025-12-13T00:25:25.858028682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 13 00:25:25.858071 containerd[1623]: time="2025-12-13T00:25:25.858040945Z" level=info msg="Connect containerd service" Dec 13 00:25:25.858071 containerd[1623]: time="2025-12-13T00:25:25.858065050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 00:25:25.861011 containerd[1623]: time="2025-12-13T00:25:25.860967022Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:25:26.121887 containerd[1623]: time="2025-12-13T00:25:26.121648974Z" level=info msg="Start subscribing containerd event" Dec 13 00:25:26.121887 containerd[1623]: time="2025-12-13T00:25:26.121769890Z" level=info msg="Start recovering state" Dec 13 00:25:26.122450 containerd[1623]: time="2025-12-13T00:25:26.122409700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 00:25:26.122563 containerd[1623]: time="2025-12-13T00:25:26.122496173Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122637177Z" level=info msg="Start event monitor" Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122677132Z" level=info msg="Start cni network conf syncer for default" Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122686289Z" level=info msg="Start streaming server" Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122702770Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122711146Z" level=info msg="runtime interface starting up..." Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122723499Z" level=info msg="starting plugins..." Dec 13 00:25:26.122930 containerd[1623]: time="2025-12-13T00:25:26.122743086Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 13 00:25:26.124998 containerd[1623]: time="2025-12-13T00:25:26.123994773Z" level=info msg="containerd successfully booted in 0.320699s" Dec 13 00:25:26.124254 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 00:25:26.881440 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 00:25:26.884752 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:58934.service - OpenSSH per-connection server daemon (10.0.0.1:58934). Dec 13 00:25:27.113580 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 58934 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:27.115978 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:27.122946 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 00:25:27.141780 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 00:25:27.148527 systemd-logind[1589]: New session 1 of user core. Dec 13 00:25:27.212029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 00:25:27.221236 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 00:25:27.246577 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:27.249498 systemd-logind[1589]: New session 2 of user core. Dec 13 00:25:27.255298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:27.257840 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 00:25:27.332848 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:27.497647 systemd[1719]: Queued start job for default target default.target. Dec 13 00:25:27.506603 systemd[1719]: Created slice app.slice - User Application Slice. Dec 13 00:25:27.506644 systemd[1719]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 13 00:25:27.506662 systemd[1719]: Reached target paths.target - Paths. Dec 13 00:25:27.506729 systemd[1719]: Reached target timers.target - Timers. Dec 13 00:25:27.508540 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 00:25:27.509592 systemd[1719]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 13 00:25:27.565755 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Dec 13 00:25:27.565867 systemd[1719]: Reached target sockets.target - Sockets. Dec 13 00:25:27.570928 systemd[1719]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 13 00:25:27.571060 systemd[1719]: Reached target basic.target - Basic System. Dec 13 00:25:27.571123 systemd[1719]: Reached target default.target - Main User Target. Dec 13 00:25:27.571170 systemd[1719]: Startup finished in 274ms. Dec 13 00:25:27.571459 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 00:25:27.579645 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 00:25:27.582227 systemd[1]: Startup finished in 2.961s (kernel) + 6.163s (initrd) + 7.069s (userspace) = 16.193s. Dec 13 00:25:27.608713 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:58948.service - OpenSSH per-connection server daemon (10.0.0.1:58948). Dec 13 00:25:27.744878 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 58948 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:27.747608 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:27.754261 systemd-logind[1589]: New session 3 of user core. Dec 13 00:25:27.789608 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 00:25:27.806992 sshd[1750]: Connection closed by 10.0.0.1 port 58948 Dec 13 00:25:27.808305 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:27.817296 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:58948.service: Deactivated successfully. Dec 13 00:25:27.819468 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 00:25:27.820344 systemd-logind[1589]: Session 3 logged out. Waiting for processes to exit. Dec 13 00:25:27.823910 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:58962.service - OpenSSH per-connection server daemon (10.0.0.1:58962). Dec 13 00:25:27.824870 systemd-logind[1589]: Removed session 3. Dec 13 00:25:27.933750 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 58962 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:27.937859 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:27.944855 systemd-logind[1589]: New session 4 of user core. Dec 13 00:25:27.960726 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 00:25:27.972921 sshd[1762]: Connection closed by 10.0.0.1 port 58962 Dec 13 00:25:27.973541 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:27.982664 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:58962.service: Deactivated successfully. Dec 13 00:25:27.985401 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 00:25:27.987071 systemd-logind[1589]: Session 4 logged out. Waiting for processes to exit. Dec 13 00:25:28.020688 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:58966.service - OpenSSH per-connection server daemon (10.0.0.1:58966). Dec 13 00:25:28.021800 systemd-logind[1589]: Removed session 4. Dec 13 00:25:28.076612 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 58966 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:28.079269 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:28.084828 systemd-logind[1589]: New session 5 of user core. Dec 13 00:25:28.105598 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 00:25:28.119314 sshd[1774]: Connection closed by 10.0.0.1 port 58966 Dec 13 00:25:28.120003 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:28.129852 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:58966.service: Deactivated successfully. Dec 13 00:25:28.131504 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 00:25:28.132282 systemd-logind[1589]: Session 5 logged out. Waiting for processes to exit. Dec 13 00:25:28.134958 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:58980.service - OpenSSH per-connection server daemon (10.0.0.1:58980). Dec 13 00:25:28.135558 systemd-logind[1589]: Removed session 5. Dec 13 00:25:28.236848 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 58980 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:28.238827 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:28.243328 systemd-logind[1589]: New session 6 of user core. Dec 13 00:25:28.249350 kubelet[1727]: E1213 00:25:28.249290 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:25:28.253568 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 00:25:28.253894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:25:28.254072 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:25:28.254452 systemd[1]: kubelet.service: Consumed 2.504s CPU time, 266.6M memory peak. Dec 13 00:25:28.436825 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 00:25:28.437424 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:28.459174 sudo[1787]: pam_unix(sudo:session): session closed for user root Dec 13 00:25:28.461258 sshd[1786]: Connection closed by 10.0.0.1 port 58980 Dec 13 00:25:28.461758 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:28.474467 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:58980.service: Deactivated successfully. Dec 13 00:25:28.476451 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 00:25:28.477301 systemd-logind[1589]: Session 6 logged out. Waiting for processes to exit. Dec 13 00:25:28.479991 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:58990.service - OpenSSH per-connection server daemon (10.0.0.1:58990). Dec 13 00:25:28.480876 systemd-logind[1589]: Removed session 6. Dec 13 00:25:28.544398 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:28.546684 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:28.551170 systemd-logind[1589]: New session 7 of user core. Dec 13 00:25:28.564670 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 00:25:28.579663 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 00:25:28.580008 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:28.585844 sudo[1801]: pam_unix(sudo:session): session closed for user root Dec 13 00:25:28.593362 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 00:25:28.593755 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:28.604567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:25:28.652000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:25:28.653618 augenrules[1825]: No rules Dec 13 00:25:28.654463 kernel: kauditd_printk_skb: 83 callbacks suppressed Dec 13 00:25:28.654502 kernel: audit: type=1305 audit(1765585528.652:233): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:25:28.655437 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:25:28.655784 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:25:28.652000 audit[1825]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfd191460 a2=420 a3=0 items=0 ppid=1806 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:28.657018 sudo[1800]: pam_unix(sudo:session): session closed for user root Dec 13 00:25:28.658513 sshd[1799]: Connection closed by 10.0.0.1 port 58990 Dec 13 00:25:28.658853 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:28.662588 kernel: audit: type=1300 audit(1765585528.652:233): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfd191460 a2=420 a3=0 items=0 ppid=1806 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:28.662635 kernel: audit: type=1327 audit(1765585528.652:233): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:28.652000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:28.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.668697 kernel: audit: type=1130 audit(1765585528.653:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.668726 kernel: audit: type=1131 audit(1765585528.653:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:28.656000 audit[1800]: USER_END pid=1800 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.676560 kernel: audit: type=1106 audit(1765585528.656:236): pid=1800 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.676603 kernel: audit: type=1104 audit(1765585528.656:237): pid=1800 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.656000 audit[1800]: CRED_DISP pid=1800 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.680387 kernel: audit: type=1106 audit(1765585528.656:238): pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.656000 audit[1794]: USER_END pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.685023 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:58990.service: Deactivated successfully. Dec 13 00:25:28.656000 audit[1794]: CRED_DISP pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.687042 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 00:25:28.689654 systemd-logind[1589]: Session 7 logged out. Waiting for processes to exit. Dec 13 00:25:28.690714 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:59000.service - OpenSSH per-connection server daemon (10.0.0.1:59000). Dec 13 00:25:28.691039 kernel: audit: type=1104 audit(1765585528.656:239): pid=1794 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.691072 kernel: audit: type=1131 audit(1765585528.679:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.117:22-10.0.0.1:58990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.117:22-10.0.0.1:58990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.691984 systemd-logind[1589]: Removed session 7. 
Dec 13 00:25:28.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.741000 audit[1834]: USER_ACCT pid=1834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.743397 sshd[1834]: Accepted publickey for core from 10.0.0.1 port 59000 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:25:28.743000 audit[1834]: CRED_ACQ pid=1834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.743000 audit[1834]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc61b956a0 a2=3 a3=0 items=0 ppid=1 pid=1834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:28.743000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:25:28.745101 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:28.750082 systemd-logind[1589]: New session 8 of user core. Dec 13 00:25:28.768570 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 00:25:28.769000 audit[1834]: USER_START pid=1834 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.770000 audit[1838]: CRED_ACQ pid=1838 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:28.780000 audit[1839]: USER_ACCT pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.782465 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 00:25:28.781000 audit[1839]: CRED_REFR pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:28.782827 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:28.781000 audit[1839]: USER_START pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:29.374071 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 00:25:29.397685 (dockerd)[1862]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 00:25:29.788853 dockerd[1862]: time="2025-12-13T00:25:29.788713368Z" level=info msg="Starting up" Dec 13 00:25:29.792651 dockerd[1862]: time="2025-12-13T00:25:29.792623261Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 13 00:25:29.809847 dockerd[1862]: time="2025-12-13T00:25:29.809788642Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 13 00:25:30.492694 dockerd[1862]: time="2025-12-13T00:25:30.492631907Z" level=info msg="Loading containers: start." Dec 13 00:25:30.508419 kernel: Initializing XFRM netlink socket Dec 13 00:25:30.585000 audit[1916]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.585000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe498fd960 a2=0 a3=0 items=0 ppid=1862 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:25:30.588000 audit[1918]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.588000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdde564e60 a2=0 a3=0 items=0 ppid=1862 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:25:30.590000 audit[1920]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.590000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4079df20 a2=0 a3=0 items=0 ppid=1862 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:25:30.592000 audit[1922]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.592000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecb24a8f0 a2=0 a3=0 items=0 ppid=1862 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:25:30.595000 audit[1924]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.595000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0e6b3590 a2=0 a3=0 items=0 ppid=1862 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:25:30.598000 audit[1926]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.598000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd4ce22420 a2=0 a3=0 items=0 ppid=1862 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:30.601000 audit[1928]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.601000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe28d86720 a2=0 a3=0 items=0 ppid=1862 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:25:30.604000 audit[1930]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.604000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffed85a0b00 a2=0 a3=0 items=0 ppid=1862 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:25:30.638000 audit[1933]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.638000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd2ab42b90 a2=0 a3=0 items=0 ppid=1862 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 00:25:30.641000 audit[1935]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1935 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.641000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe55f45600 a2=0 a3=0 items=0 ppid=1862 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.641000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:25:30.644000 audit[1937]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.644000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffea2fb62c0 a2=0 a3=0 items=0 ppid=1862 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:25:30.647000 audit[1939]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.647000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe33b38f80 a2=0 a3=0 items=0 ppid=1862 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:30.650000 audit[1941]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.650000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffda55f45c0 a2=0 a3=0 items=0 ppid=1862 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.650000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:25:30.699000 audit[1971]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.699000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffd9f07be0 a2=0 a3=0 items=0 ppid=1862 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:25:30.702000 audit[1973]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.702000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe54685e80 a2=0 a3=0 items=0 ppid=1862 pid=1973 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:25:30.704000 audit[1975]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.704000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8b962b90 a2=0 a3=0 items=0 ppid=1862 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:25:30.707000 audit[1977]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.707000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdba06b4b0 a2=0 a3=0 items=0 ppid=1862 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:25:30.709000 audit[1979]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.709000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe614f0b70 a2=0 a3=0 items=0 ppid=1862 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:25:30.711000 audit[1981]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.711000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd9edd9640 a2=0 a3=0 items=0 ppid=1862 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:30.713000 audit[1983]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.713000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff47a26830 a2=0 a3=0 items=0 ppid=1862 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.713000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:25:30.716000 audit[1985]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.716000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc4ecb6cc0 a2=0 a3=0 items=0 ppid=1862 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:25:30.719000 audit[1987]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.719000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fffe71eb8e0 a2=0 a3=0 items=0 ppid=1862 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 13 00:25:30.721000 audit[1989]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.721000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd8fc07270 a2=0 a3=0 items=0 ppid=1862 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:25:30.723000 audit[1991]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.723000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffda72d8130 a2=0 a3=0 items=0 ppid=1862 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:25:30.726000 audit[1993]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.726000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe0101e800 a2=0 a3=0 items=0 ppid=1862 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.726000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:30.728000 audit[1995]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.728000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff7d052460 a2=0 a3=0 items=0 ppid=1862 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:25:30.736000 audit[2000]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.736000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0105f0b0 a2=0 a3=0 items=0 ppid=1862 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:25:30.739000 audit[2002]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.739000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffda8b73020 a2=0 a3=0 items=0 ppid=1862 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:25:30.742000 audit[2004]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.742000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc3b554610 a2=0 a3=0 items=0 ppid=1862 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:25:30.744000 audit[2006]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.744000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf6b279c0 a2=0 a3=0 items=0 ppid=1862 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:25:30.747000 audit[2008]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.747000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc508817d0 a2=0 a3=0 items=0 ppid=1862 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:25:30.749000 audit[2010]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:30.749000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff68e9acc0 a2=0 a3=0 items=0 ppid=1862 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.749000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:25:30.768000 audit[2014]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.768000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc794b4930 a2=0 a3=0 items=0 ppid=1862 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 00:25:30.771000 audit[2016]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.771000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd48748300 a2=0 a3=0 items=0 ppid=1862 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.771000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 00:25:30.782000 audit[2024]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.782000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffce4c71e50 a2=0 a3=0 items=0 ppid=1862 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 13 00:25:30.793000 audit[2030]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.793000 
audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc75ea1790 a2=0 a3=0 items=0 ppid=1862 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 13 00:25:30.796000 audit[2032]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.796000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffbbf1deb0 a2=0 a3=0 items=0 ppid=1862 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 00:25:30.799000 audit[2034]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.799000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcde7c3a90 a2=0 a3=0 items=0 ppid=1862 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 13 00:25:30.801000 audit[2036]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.801000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff7ac2da10 a2=0 a3=0 items=0 ppid=1862 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:25:30.804000 audit[2038]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:30.804000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe2e33d330 a2=0 a3=0 items=0 ppid=1862 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:30.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 00:25:30.806251 
systemd-networkd[1314]: docker0: Link UP Dec 13 00:25:30.812641 dockerd[1862]: time="2025-12-13T00:25:30.812605572Z" level=info msg="Loading containers: done." Dec 13 00:25:30.834157 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1870648974-merged.mount: Deactivated successfully. Dec 13 00:25:30.841230 dockerd[1862]: time="2025-12-13T00:25:30.841177414Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 00:25:30.841407 dockerd[1862]: time="2025-12-13T00:25:30.841289304Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 00:25:30.841439 dockerd[1862]: time="2025-12-13T00:25:30.841416032Z" level=info msg="Initializing buildkit" Dec 13 00:25:30.886101 dockerd[1862]: time="2025-12-13T00:25:30.886036281Z" level=info msg="Completed buildkit initialization" Dec 13 00:25:30.892993 dockerd[1862]: time="2025-12-13T00:25:30.892936431Z" level=info msg="Daemon has completed initialization" Dec 13 00:25:30.893224 dockerd[1862]: time="2025-12-13T00:25:30.893155502Z" level=info msg="API listen on /run/docker.sock" Dec 13 00:25:30.893502 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 00:25:30.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:31.582072 containerd[1623]: time="2025-12-13T00:25:31.581998348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 13 00:25:32.420406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642938194.mount: Deactivated successfully. 
Dec 13 00:25:34.617288 containerd[1623]: time="2025-12-13T00:25:34.617211286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:34.618308 containerd[1623]: time="2025-12-13T00:25:34.618281202Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28551603" Dec 13 00:25:34.621035 containerd[1623]: time="2025-12-13T00:25:34.620938936Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:34.624066 containerd[1623]: time="2025-12-13T00:25:34.623999296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:34.625257 containerd[1623]: time="2025-12-13T00:25:34.625206059Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.043137198s" Dec 13 00:25:34.625257 containerd[1623]: time="2025-12-13T00:25:34.625254149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 13 00:25:34.626070 containerd[1623]: time="2025-12-13T00:25:34.626031276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 13 00:25:36.520193 containerd[1623]: time="2025-12-13T00:25:36.520116592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:36.520990 containerd[1623]: time="2025-12-13T00:25:36.520932352Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 13 00:25:36.522240 containerd[1623]: time="2025-12-13T00:25:36.522202785Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:36.525985 containerd[1623]: time="2025-12-13T00:25:36.525928081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:36.526827 containerd[1623]: time="2025-12-13T00:25:36.526781812Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.900705501s" Dec 13 00:25:36.526827 containerd[1623]: time="2025-12-13T00:25:36.526824903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 13 00:25:36.527492 
containerd[1623]: time="2025-12-13T00:25:36.527428364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 13 00:25:38.426684 containerd[1623]: time="2025-12-13T00:25:38.426611431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:38.427595 containerd[1623]: time="2025-12-13T00:25:38.427526857Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20152717" Dec 13 00:25:38.429245 containerd[1623]: time="2025-12-13T00:25:38.429201979Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:38.432186 containerd[1623]: time="2025-12-13T00:25:38.432133597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:38.433110 containerd[1623]: time="2025-12-13T00:25:38.433079030Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.905615129s" Dec 13 00:25:38.433185 containerd[1623]: time="2025-12-13T00:25:38.433114186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 13 00:25:38.434058 containerd[1623]: time="2025-12-13T00:25:38.434009455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 13 00:25:38.504666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 00:25:38.506610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:38.791142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:38.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:38.792618 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 13 00:25:38.792708 kernel: audit: type=1130 audit(1765585538.789:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:38.797245 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:38.839523 kubelet[2157]: E1213 00:25:38.839455 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:25:38.846400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:25:38.846615 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 00:25:38.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:38.847069 systemd[1]: kubelet.service: Consumed 272ms CPU time, 108.9M memory peak. Dec 13 00:25:38.852419 kernel: audit: type=1131 audit(1765585538.845:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:39.967059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496259197.mount: Deactivated successfully. Dec 13 00:25:42.473420 containerd[1623]: time="2025-12-13T00:25:42.473331094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:42.551613 containerd[1623]: time="2025-12-13T00:25:42.551543270Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 13 00:25:42.630294 containerd[1623]: time="2025-12-13T00:25:42.630220658Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:42.676922 containerd[1623]: time="2025-12-13T00:25:42.676853582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:42.677652 containerd[1623]: time="2025-12-13T00:25:42.677238704Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.243187881s" Dec 13 00:25:42.677652 containerd[1623]: time="2025-12-13T00:25:42.677291523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 13 00:25:42.677991 containerd[1623]: time="2025-12-13T00:25:42.677880768Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 13 00:25:44.433486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13299900.mount: Deactivated successfully. 
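The containerd pull records above report both the bytes read while pulling and the elapsed wall time, so an approximate pull rate can be derived. The tuples below are copied from those log lines; the MB/s figures are derived here, not values from the log:

```python
# Rough throughput estimate for the image pulls logged above.
# (image, bytes read while pulling, elapsed seconds) taken from the containerd lines.
pulls = [
    ("kube-apiserver:v1.33.7",          28551603, 3.043137198),
    ("kube-controller-manager:v1.33.7", 26008626, 1.900705501),
    ("kube-scheduler:v1.33.7",          20152717, 1.905615129),
    ("kube-proxy:v1.33.7",              31926374, 4.243187881),
]

for name, nbytes, seconds in pulls:
    rate = nbytes / seconds / 1e6  # MB/s, decimal megabytes
    print(f"{name:35s} {nbytes/1e6:6.1f} MB in {seconds:5.2f}s  ~{rate:5.1f} MB/s")
```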
Dec 13 00:25:45.215086 containerd[1623]: time="2025-12-13T00:25:45.215003078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:45.216465 containerd[1623]: time="2025-12-13T00:25:45.216420306Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20336577" Dec 13 00:25:45.218392 containerd[1623]: time="2025-12-13T00:25:45.218352379Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:45.221760 containerd[1623]: time="2025-12-13T00:25:45.221720045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:45.222792 containerd[1623]: time="2025-12-13T00:25:45.222753533Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.544781443s" Dec 13 00:25:45.222792 containerd[1623]: time="2025-12-13T00:25:45.222786585Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 13 00:25:45.223304 containerd[1623]: time="2025-12-13T00:25:45.223272085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 00:25:45.886560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401175826.mount: Deactivated successfully. 
Dec 13 00:25:45.894770 containerd[1623]: time="2025-12-13T00:25:45.894700303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:45.896136 containerd[1623]: time="2025-12-13T00:25:45.896062708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=501" Dec 13 00:25:45.897774 containerd[1623]: time="2025-12-13T00:25:45.897728251Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:45.900311 containerd[1623]: time="2025-12-13T00:25:45.900266502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:45.901541 containerd[1623]: time="2025-12-13T00:25:45.901488273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 678.1751ms" Dec 13 00:25:45.901581 containerd[1623]: time="2025-12-13T00:25:45.901550449Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 00:25:45.902143 containerd[1623]: time="2025-12-13T00:25:45.902107324Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 13 00:25:46.656578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505673532.mount: Deactivated successfully. 
Dec 13 00:25:48.745145 containerd[1623]: time="2025-12-13T00:25:48.745043945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:48.746076 containerd[1623]: time="2025-12-13T00:25:48.746017230Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Dec 13 00:25:48.747520 containerd[1623]: time="2025-12-13T00:25:48.747473261Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:48.750197 containerd[1623]: time="2025-12-13T00:25:48.750149299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:48.751367 containerd[1623]: time="2025-12-13T00:25:48.751315717Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.849178727s" Dec 13 00:25:48.751446 containerd[1623]: time="2025-12-13T00:25:48.751369788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 13 00:25:49.097355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 00:25:49.099510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:49.381628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:49.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.386427 kernel: audit: type=1130 audit(1765585549.380:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:49.412918 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:49.561842 kubelet[2319]: E1213 00:25:49.561772 2319 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:25:49.566193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:25:49.566403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:25:49.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:49.566876 systemd[1]: kubelet.service: Consumed 337ms CPU time, 110.6M memory peak. 
Dec 13 00:25:49.571432 kernel: audit: type=1131 audit(1765585549.565:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:52.426223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:52.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.426612 systemd[1]: kubelet.service: Consumed 337ms CPU time, 110.6M memory peak. Dec 13 00:25:52.429751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:52.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.435560 kernel: audit: type=1130 audit(1765585552.425:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.435616 kernel: audit: type=1131 audit(1765585552.425:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:52.456412 systemd[1]: Reload requested from client PID 2335 ('systemctl') (unit session-8.scope)... Dec 13 00:25:52.456431 systemd[1]: Reloading... Dec 13 00:25:52.560425 zram_generator::config[2381]: No configuration found. Dec 13 00:25:52.947323 systemd[1]: Reloading finished in 490 ms. 
Dec 13 00:25:52.974000 audit: BPF prog-id=67 op=LOAD Dec 13 00:25:52.974000 audit: BPF prog-id=64 op=UNLOAD Dec 13 00:25:52.978764 kernel: audit: type=1334 audit(1765585552.974:297): prog-id=67 op=LOAD Dec 13 00:25:52.978824 kernel: audit: type=1334 audit(1765585552.974:298): prog-id=64 op=UNLOAD Dec 13 00:25:52.978850 kernel: audit: type=1334 audit(1765585552.974:299): prog-id=68 op=LOAD Dec 13 00:25:52.974000 audit: BPF prog-id=68 op=LOAD Dec 13 00:25:52.980147 kernel: audit: type=1334 audit(1765585552.974:300): prog-id=69 op=LOAD Dec 13 00:25:52.974000 audit: BPF prog-id=69 op=LOAD Dec 13 00:25:52.981554 kernel: audit: type=1334 audit(1765585552.974:301): prog-id=65 op=UNLOAD Dec 13 00:25:52.974000 audit: BPF prog-id=65 op=UNLOAD Dec 13 00:25:52.983016 kernel: audit: type=1334 audit(1765585552.974:302): prog-id=66 op=UNLOAD Dec 13 00:25:52.974000 audit: BPF prog-id=66 op=UNLOAD Dec 13 00:25:52.985000 audit: BPF prog-id=70 op=LOAD Dec 13 00:25:52.985000 audit: BPF prog-id=60 op=UNLOAD Dec 13 00:25:52.985000 audit: BPF prog-id=71 op=LOAD Dec 13 00:25:52.986000 audit: BPF prog-id=72 op=LOAD Dec 13 00:25:52.986000 audit: BPF prog-id=61 op=UNLOAD Dec 13 00:25:52.986000 audit: BPF prog-id=62 op=UNLOAD Dec 13 00:25:52.986000 audit: BPF prog-id=73 op=LOAD Dec 13 00:25:52.986000 audit: BPF prog-id=50 op=UNLOAD Dec 13 00:25:52.987000 audit: BPF prog-id=74 op=LOAD Dec 13 00:25:52.987000 audit: BPF prog-id=63 op=UNLOAD Dec 13 00:25:52.990000 audit: BPF prog-id=75 op=LOAD Dec 13 00:25:52.990000 audit: BPF prog-id=54 op=UNLOAD Dec 13 00:25:52.990000 audit: BPF prog-id=76 op=LOAD Dec 13 00:25:52.990000 audit: BPF prog-id=77 op=LOAD Dec 13 00:25:52.990000 audit: BPF prog-id=55 op=UNLOAD Dec 13 00:25:52.990000 audit: BPF prog-id=56 op=UNLOAD Dec 13 00:25:52.991000 audit: BPF prog-id=78 op=LOAD Dec 13 00:25:52.991000 audit: BPF prog-id=51 op=UNLOAD Dec 13 00:25:52.991000 audit: BPF prog-id=79 op=LOAD Dec 13 00:25:52.991000 audit: BPF prog-id=80 op=LOAD Dec 13 00:25:52.991000 audit: BPF prog-id=52 op=UNLOAD Dec 13 00:25:52.991000 audit: BPF prog-id=53 op=UNLOAD Dec 13 00:25:52.992000 audit: BPF prog-id=81 op=LOAD Dec 13 00:25:52.992000 audit: BPF prog-id=49 op=UNLOAD Dec 13 00:25:52.992000 audit: BPF prog-id=82 op=LOAD Dec 13 00:25:52.992000 audit: BPF prog-id=83 op=LOAD Dec 13 00:25:52.992000 audit: BPF prog-id=47 op=UNLOAD Dec 13 00:25:52.992000 audit: BPF prog-id=48 op=UNLOAD Dec 13 00:25:52.994000 audit: BPF prog-id=84 op=LOAD Dec 13 00:25:52.994000 audit: BPF prog-id=57 op=UNLOAD Dec 13 00:25:52.994000 audit: BPF prog-id=85 op=LOAD Dec 13 00:25:52.994000 audit: BPF prog-id=86 op=LOAD Dec 13 00:25:52.994000 audit: BPF prog-id=58 op=UNLOAD Dec 13 00:25:52.994000 audit: BPF prog-id=59 op=UNLOAD Dec 13 00:25:53.019310 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 00:25:53.019566 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 00:25:53.020114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:53.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:53.020189 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.7M memory peak. Dec 13 00:25:53.022824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:53.274913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
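The burst of BPF prog-id LOAD/UNLOAD audit records around the daemon-reload above comes from systemd re-attaching its per-unit cgroup BPF programs during the reload, loading a replacement program and unloading the old id for each unit. A small tally sketch over journal text (reads stdin, e.g. `journalctl -k | python3 tally_bpf.py`):

```python
# Tally the "audit: BPF prog-id=N op=LOAD/UNLOAD" records shown above.
# Pure log-analysis helper; reads journal text on stdin.
import re
import sys
from collections import Counter

pattern = re.compile(r"BPF prog-id=\d+ op=(LOAD|UNLOAD)")
counts = Counter(pattern.findall(sys.stdin.read()))
print(dict(counts))  # prints LOAD/UNLOAD counts for the captured span
```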
Dec 13 00:25:53.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:53.281543 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:25:53.333737 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:25:53.333737 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:25:53.333737 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:25:53.334153 kubelet[2429]: I1213 00:25:53.333806 2429 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:25:53.697418 kubelet[2429]: I1213 00:25:53.697352 2429 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 13 00:25:53.697418 kubelet[2429]: I1213 00:25:53.697409 2429 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:25:53.697694 kubelet[2429]: I1213 00:25:53.697666 2429 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:25:53.723610 kubelet[2429]: E1213 00:25:53.723556 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 00:25:53.726972 kubelet[2429]: I1213 00:25:53.726909 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:25:53.736507 kubelet[2429]: I1213 00:25:53.736471 2429 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:25:53.742524 kubelet[2429]: I1213 00:25:53.742469 2429 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 00:25:53.742784 kubelet[2429]: I1213 00:25:53.742746 2429 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:25:53.742995 kubelet[2429]: I1213 00:25:53.742777 2429 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:25:53.743110 kubelet[2429]: I1213 00:25:53.743006 2429 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:25:53.743110 kubelet[2429]: I1213 00:25:53.743017 2429 container_manager_linux.go:303] "Creating device plugin manager" Dec 13 00:25:53.766265 kubelet[2429]: I1213 00:25:53.766231 2429 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:25:54.627272 kubelet[2429]: I1213 00:25:54.627201 2429 kubelet.go:480] "Attempting to sync node with API server" Dec 13 00:25:54.627272 kubelet[2429]: I1213 00:25:54.627275 2429 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:25:54.627895 kubelet[2429]: I1213 00:25:54.627339 2429 kubelet.go:386] "Adding apiserver pod source" Dec 13 00:25:54.627895 kubelet[2429]: I1213 00:25:54.627400 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:25:54.654671 kubelet[2429]: I1213 00:25:54.654624 2429 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:25:54.655215 kubelet[2429]: I1213 00:25:54.655185 2429 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:25:54.660323 kubelet[2429]: W1213 00:25:54.660282 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
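The container_manager_linux line above dumps the full NodeConfig as JSON, including the default hard-eviction thresholds. A sketch that extracts that JSON from kubelet journal output (fed on stdin, e.g. from `journalctl -u kubelet`) and prints just the thresholds; it assumes the JSON sits on a single journal line, as it does here:

```python
import json
import sys

def node_configs(stream):
    """Yield the nodeConfig JSON objects embedded in kubelet journal lines."""
    dec = json.JSONDecoder()
    for line in stream:
        _, sep, tail = line.partition("nodeConfig=")
        if sep:
            obj, _ = dec.raw_decode(tail)  # tolerates trailing log text on the line
            yield obj

for cfg in node_configs(sys.stdin):
    for threshold in cfg.get("HardEvictionThresholds", []):
        print(threshold["Signal"], threshold["Operator"], threshold["Value"])
```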
Dec 13 00:25:54.661909 kubelet[2429]: E1213 00:25:54.661738 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:25:54.661909 kubelet[2429]: E1213 00:25:54.661817 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:25:54.663662 kubelet[2429]: I1213 00:25:54.663624 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 00:25:54.663730 kubelet[2429]: I1213 00:25:54.663714 2429 server.go:1289] "Started kubelet" Dec 13 00:25:54.663874 kubelet[2429]: I1213 00:25:54.663824 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:25:54.665065 kubelet[2429]: I1213 00:25:54.665048 2429 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:25:54.665198 kubelet[2429]: I1213 00:25:54.665178 2429 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:25:54.665615 kubelet[2429]: I1213 00:25:54.665582 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:25:54.669363 kubelet[2429]: I1213 00:25:54.669290 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:25:54.670310 kubelet[2429]: I1213 00:25:54.670271 2429 server.go:317] "Adding debug handlers to kubelet server" Dec 13 00:25:54.671679 kubelet[2429]: I1213 00:25:54.671651 2429 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:25:54.671810 kubelet[2429]: I1213 00:25:54.671782 2429 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:25:54.673695 kubelet[2429]: I1213 00:25:54.673614 2429 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 00:25:54.673849 kubelet[2429]: I1213 00:25:54.673829 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 00:25:54.673949 kubelet[2429]: I1213 00:25:54.673934 2429 reconciler.go:26] "Reconciler: start to sync state" Dec 13 00:25:54.674113 kubelet[2429]: E1213 00:25:54.674081 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:25:54.674452 kubelet[2429]: E1213 00:25:54.674374 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:25:54.674549 kubelet[2429]: E1213 00:25:54.674527 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.117:6443: connect: connection refused" interval="200ms" Dec 13 00:25:54.676337 kubelet[2429]: I1213 00:25:54.675722 2429 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:25:54.677089 kubelet[2429]: E1213 00:25:54.674948 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809eb51e49d0d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:25:54.663657688 +0000 UTC m=+1.377179404,LastTimestamp:2025-12-13 00:25:54.663657688 +0000 UTC m=+1.377179404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:25:54.677719 kubelet[2429]: E1213 00:25:54.677700 2429 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:25:54.681000 audit[2448]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.683980 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 13 00:25:54.684029 kernel: audit: type=1325 audit(1765585554.681:339): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.681000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd30fac40 a2=0 a3=0 items=0 ppid=2429 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.694713 kernel: audit: type=1300 audit(1765585554.681:339): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd30fac40 a2=0 a3=0 items=0 ppid=2429 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:25:54.698499 kubelet[2429]: I1213 00:25:54.695458 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:25:54.698499 kubelet[2429]: I1213 00:25:54.695475 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:25:54.698499 kubelet[2429]: I1213 00:25:54.695503 2429 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:25:54.699111 kernel: audit: type=1327 audit(1765585554.681:339): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:25:54.684000 audit[2450]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.702731 kubelet[2429]: I1213 00:25:54.702484 2429 policy_none.go:49] "None policy: Start" Dec 13 00:25:54.702731 kubelet[2429]: I1213 00:25:54.702517 2429 memory_manager.go:186] "Starting memorymanager" 
policy="None" Dec 13 00:25:54.702731 kubelet[2429]: I1213 00:25:54.702539 2429 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:25:54.703419 kernel: audit: type=1325 audit(1765585554.684:340): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.703461 kernel: audit: type=1300 audit(1765585554.684:340): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3aec6150 a2=0 a3=0 items=0 ppid=2429 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.684000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3aec6150 a2=0 a3=0 items=0 ppid=2429 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.715057 kernel: audit: type=1327 audit(1765585554.684:340): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:25:54.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:25:54.715316 kubelet[2429]: I1213 00:25:54.714789 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 13 00:25:54.686000 audit[2452]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.717904 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 00:25:54.719709 kernel: audit: type=1325 audit(1765585554.686:341): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.719759 kubelet[2429]: I1213 00:25:54.716597 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 13 00:25:54.719759 kubelet[2429]: I1213 00:25:54.716623 2429 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 13 00:25:54.719759 kubelet[2429]: I1213 00:25:54.716653 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
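The repeated `dial tcp 10.0.0.117:6443: connect: connection refused` errors above are the kubelet racing its own control plane: nothing listens on 6443 until the kube-apiserver static pod is up. A quick reachability probe for that endpoint (address copied from the log):

```python
# Probe the API server endpoint the kubelet keeps retrying above.
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"      # what the kubelet is seeing here
    except OSError as exc:
        return f"error: {exc}"

print(probe("10.0.0.117", 6443))
```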
Dec 13 00:25:54.719759 kubelet[2429]: I1213 00:25:54.716667 2429 kubelet.go:2436] "Starting kubelet main sync loop" Dec 13 00:25:54.719759 kubelet[2429]: E1213 00:25:54.716721 2429 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:25:54.719759 kubelet[2429]: E1213 00:25:54.717217 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:25:54.686000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe8de19330 a2=0 a3=0 items=0 ppid=2429 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:54.730226 kernel: audit: type=1300 audit(1765585554.686:341): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe8de19330 a2=0 a3=0 items=0 ppid=2429 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.730295 kernel: audit: type=1327 audit(1765585554.686:341): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:54.730332 kernel: audit: type=1325 audit(1765585554.690:342): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.690000 audit[2455]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.690000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffed84f3040 a2=0 a3=0 items=0 ppid=2429 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:54.713000 audit[2460]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.713000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff675688a0 a2=0 a3=0 items=0 ppid=2429 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 00:25:54.714000 audit[2461]: 
NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:54.714000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef2506580 a2=0 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.714000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:25:54.715000 audit[2462]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.715000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff51e39ca0 a2=0 a3=0 items=0 ppid=2429 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:25:54.718000 audit[2464]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.718000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb19bbf40 a2=0 a3=0 items=0 ppid=2429 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:25:54.718000 audit[2463]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:54.718000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6244f490 a2=0 a3=0 items=0 ppid=2429 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:25:54.719000 audit[2466]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:54.719000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe32a0a110 a2=0 a3=0 items=0 ppid=2429 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:25:54.719000 audit[2467]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:54.719000 audit[2467]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffc3d9ee1f0 a2=0 a3=0 items=0 ppid=2429 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:25:54.720000 audit[2468]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:54.720000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6e000030 a2=0 a3=0 items=0 ppid=2429 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:25:54.738750 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 00:25:54.742076 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 00:25:54.755287 kubelet[2429]: E1213 00:25:54.755259 2429 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:25:54.755546 kubelet[2429]: I1213 00:25:54.755519 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:25:54.755593 kubelet[2429]: I1213 00:25:54.755535 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:25:54.755788 kubelet[2429]: I1213 00:25:54.755757 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:25:54.756265 kubelet[2429]: E1213 00:25:54.756198 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 00:25:54.756265 kubelet[2429]: E1213 00:25:54.756237 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 00:25:54.832977 systemd[1]: Created slice kubepods-burstable-pode197405f513e0a52a9b18b708e4ceb0d.slice - libcontainer container kubepods-burstable-pode197405f513e0a52a9b18b708e4ceb0d.slice. Dec 13 00:25:54.858032 kubelet[2429]: I1213 00:25:54.857942 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:54.858562 kubelet[2429]: E1213 00:25:54.858529 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Dec 13 00:25:54.861302 kubelet[2429]: E1213 00:25:54.861239 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:54.864790 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
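The kubepods-burstable-pod&lt;uid&gt;.slice units created above correspond to the static pods the kubelet found under /etc/kubernetes/manifests, the static pod path it added earlier. A small sketch to list those manifests on the node; the file names are not shown in this log and will vary:

```python
# List the static pod manifests the kubelet is mirroring.
from pathlib import Path

manifest_dir = Path("/etc/kubernetes/manifests")  # path logged by the kubelet above
if manifest_dir.is_dir():
    for manifest in sorted(manifest_dir.glob("*.yaml")):
        print(manifest.name)
else:
    print(f"{manifest_dir} not present on this node")
```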
Dec 13 00:25:54.866796 kubelet[2429]: E1213 00:25:54.866768 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:54.869786 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 13 00:25:54.871735 kubelet[2429]: E1213 00:25:54.871687 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:54.875179 kubelet[2429]: E1213 00:25:54.875121 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Dec 13 00:25:54.975801 kubelet[2429]: I1213 00:25:54.975646 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:54.975801 kubelet[2429]: I1213 00:25:54.975697 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:54.975801 kubelet[2429]: I1213 00:25:54.975721 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:54.975801 kubelet[2429]: I1213 00:25:54.975745 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:54.976033 kubelet[2429]: I1213 00:25:54.975836 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:54.976033 kubelet[2429]: I1213 00:25:54.975909 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:54.976033 kubelet[2429]: I1213 00:25:54.975929 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:54.976033 kubelet[2429]: I1213 00:25:54.975946 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:54.976033 kubelet[2429]: I1213 00:25:54.975988 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:55.060192 kubelet[2429]: I1213 00:25:55.060142 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:55.060639 kubelet[2429]: E1213 00:25:55.060605 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Dec 13 00:25:55.162814 kubelet[2429]: E1213 00:25:55.162768 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.163661 containerd[1623]: time="2025-12-13T00:25:55.163621078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e197405f513e0a52a9b18b708e4ceb0d,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:55.167973 kubelet[2429]: E1213 00:25:55.167919 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.168502 containerd[1623]: time="2025-12-13T00:25:55.168461455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:55.172868 kubelet[2429]: E1213 00:25:55.172828 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.173410 containerd[1623]: time="2025-12-13T00:25:55.173341808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:55.223964 containerd[1623]: time="2025-12-13T00:25:55.223530189Z" level=info msg="connecting to shim fcfbe0ab7cb4def60e6ced691a2dea6d1396e1a5b6a1a6eb3e0862874ba41bee" address="unix:///run/containerd/s/cb15f65587c82f03422c0972f1eaca4b3d6ee9d33513d22e038e13c961c87485" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:55.225229 containerd[1623]: time="2025-12-13T00:25:55.225188499Z" level=info msg="connecting to shim 0c83bde3e9d3e201b810c96d25799fc88649f6934814d6708349d6a6ce376a05" address="unix:///run/containerd/s/748a163502e0ef237a63ad5e3af0d58cad6aff9479c0768d5abac7db16d45a02" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:55.261971 containerd[1623]: time="2025-12-13T00:25:55.260300837Z" 
level=info msg="connecting to shim 6ed3f1eee822711c43c7a32831f6a297b1efa989648312ed2910cb7e7e521892" address="unix:///run/containerd/s/30e39e5cab2343720b37c510ca127a38c8547aa05008e4f3c2dc67f1ff7b6dbb" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:55.280690 kubelet[2429]: E1213 00:25:55.280424 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Dec 13 00:25:55.314668 systemd[1]: Started cri-containerd-0c83bde3e9d3e201b810c96d25799fc88649f6934814d6708349d6a6ce376a05.scope - libcontainer container 0c83bde3e9d3e201b810c96d25799fc88649f6934814d6708349d6a6ce376a05. Dec 13 00:25:55.316738 systemd[1]: Started cri-containerd-fcfbe0ab7cb4def60e6ced691a2dea6d1396e1a5b6a1a6eb3e0862874ba41bee.scope - libcontainer container fcfbe0ab7cb4def60e6ced691a2dea6d1396e1a5b6a1a6eb3e0862874ba41bee. Dec 13 00:25:55.365015 systemd[1]: Started cri-containerd-6ed3f1eee822711c43c7a32831f6a297b1efa989648312ed2910cb7e7e521892.scope - libcontainer container 6ed3f1eee822711c43c7a32831f6a297b1efa989648312ed2910cb7e7e521892. Dec 13 00:25:55.367000 audit: BPF prog-id=87 op=LOAD Dec 13 00:25:55.368000 audit: BPF prog-id=88 op=LOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.368000 audit: BPF prog-id=88 op=UNLOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.368000 audit: BPF prog-id=89 op=LOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.368000 audit: BPF prog-id=90 op=LOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.368000 audit: BPF prog-id=90 op=UNLOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.368000 audit: BPF prog-id=89 op=UNLOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.368000 audit: BPF prog-id=91 op=LOAD Dec 13 00:25:55.368000 audit: BPF prog-id=92 op=LOAD Dec 13 00:25:55.368000 audit[2497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2485 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663666265306162376362346465663630653663656436393161326465 Dec 13 00:25:55.369000 audit: BPF prog-id=93 op=LOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.369000 audit: BPF prog-id=93 op=UNLOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.369000 audit: BPF prog-id=94 op=LOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.369000 audit: BPF prog-id=95 op=LOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.369000 audit: BPF prog-id=95 op=UNLOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.369000 audit: BPF prog-id=94 op=UNLOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.369000 audit: BPF prog-id=96 op=LOAD Dec 13 00:25:55.369000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2486 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.369000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383362646533653964336532303162383130633936643235373939 Dec 13 00:25:55.385000 audit: BPF prog-id=97 op=LOAD Dec 13 00:25:55.386000 audit: BPF prog-id=98 op=LOAD Dec 13 00:25:55.386000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.386000 audit: BPF prog-id=98 op=UNLOAD Dec 13 00:25:55.386000 audit[2548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.388000 audit: BPF prog-id=99 op=LOAD Dec 13 00:25:55.388000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.388000 audit: BPF prog-id=100 op=LOAD Dec 13 00:25:55.388000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.388000 audit: BPF prog-id=100 op=UNLOAD Dec 13 00:25:55.388000 audit[2548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.388000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.388000 audit: BPF prog-id=99 op=UNLOAD Dec 13 00:25:55.388000 audit[2548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.388000 audit: BPF prog-id=101 op=LOAD Dec 13 00:25:55.388000 audit[2548]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2519 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665643366316565653832323731316334336337613332383331663661 Dec 13 00:25:55.491971 kubelet[2429]: I1213 00:25:55.491894 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:55.493265 kubelet[2429]: E1213 00:25:55.493236 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Dec 13 00:25:55.512359 containerd[1623]: time="2025-12-13T00:25:55.512000203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ed3f1eee822711c43c7a32831f6a297b1efa989648312ed2910cb7e7e521892\"" Dec 13 00:25:55.514920 kubelet[2429]: E1213 00:25:55.514832 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.522700 containerd[1623]: time="2025-12-13T00:25:55.522635789Z" level=info msg="CreateContainer within sandbox \"6ed3f1eee822711c43c7a32831f6a297b1efa989648312ed2910cb7e7e521892\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 00:25:55.524233 containerd[1623]: time="2025-12-13T00:25:55.524182059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e197405f513e0a52a9b18b708e4ceb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c83bde3e9d3e201b810c96d25799fc88649f6934814d6708349d6a6ce376a05\"" Dec 13 00:25:55.525258 kubelet[2429]: E1213 00:25:55.525220 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.531944 containerd[1623]: time="2025-12-13T00:25:55.531904000Z" level=info msg="CreateContainer within sandbox 
\"0c83bde3e9d3e201b810c96d25799fc88649f6934814d6708349d6a6ce376a05\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 00:25:55.534132 containerd[1623]: time="2025-12-13T00:25:55.534092554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcfbe0ab7cb4def60e6ced691a2dea6d1396e1a5b6a1a6eb3e0862874ba41bee\"" Dec 13 00:25:55.534835 kubelet[2429]: E1213 00:25:55.534792 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.538291 containerd[1623]: time="2025-12-13T00:25:55.538240313Z" level=info msg="Container 738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:55.540232 containerd[1623]: time="2025-12-13T00:25:55.540198876Z" level=info msg="CreateContainer within sandbox \"fcfbe0ab7cb4def60e6ced691a2dea6d1396e1a5b6a1a6eb3e0862874ba41bee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 00:25:55.547193 containerd[1623]: time="2025-12-13T00:25:55.547147246Z" level=info msg="CreateContainer within sandbox \"6ed3f1eee822711c43c7a32831f6a297b1efa989648312ed2910cb7e7e521892\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c\"" Dec 13 00:25:55.547926 containerd[1623]: time="2025-12-13T00:25:55.547891142Z" level=info msg="StartContainer for \"738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c\"" Dec 13 00:25:55.549262 containerd[1623]: time="2025-12-13T00:25:55.549208542Z" level=info msg="connecting to shim 738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c" address="unix:///run/containerd/s/30e39e5cab2343720b37c510ca127a38c8547aa05008e4f3c2dc67f1ff7b6dbb" protocol=ttrpc version=3 Dec 13 00:25:55.550248 containerd[1623]: time="2025-12-13T00:25:55.550191095Z" level=info msg="Container 6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:55.557774 containerd[1623]: time="2025-12-13T00:25:55.557713322Z" level=info msg="Container 52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:55.562420 containerd[1623]: time="2025-12-13T00:25:55.562274756Z" level=info msg="CreateContainer within sandbox \"0c83bde3e9d3e201b810c96d25799fc88649f6934814d6708349d6a6ce376a05\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160\"" Dec 13 00:25:55.563334 containerd[1623]: time="2025-12-13T00:25:55.563289108Z" level=info msg="StartContainer for \"6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160\"" Dec 13 00:25:55.564589 containerd[1623]: time="2025-12-13T00:25:55.564509497Z" level=info msg="connecting to shim 6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160" address="unix:///run/containerd/s/748a163502e0ef237a63ad5e3af0d58cad6aff9479c0768d5abac7db16d45a02" protocol=ttrpc version=3 Dec 13 00:25:55.565000 containerd[1623]: time="2025-12-13T00:25:55.564949232Z" level=info msg="CreateContainer within sandbox \"fcfbe0ab7cb4def60e6ced691a2dea6d1396e1a5b6a1a6eb3e0862874ba41bee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1\"" Dec 13 00:25:55.565708 containerd[1623]: time="2025-12-13T00:25:55.565669472Z" level=info msg="StartContainer for \"52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1\"" Dec 13 00:25:55.567217 containerd[1623]: time="2025-12-13T00:25:55.567185986Z" level=info msg="connecting to shim 52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1" address="unix:///run/containerd/s/cb15f65587c82f03422c0972f1eaca4b3d6ee9d33513d22e038e13c961c87485" protocol=ttrpc version=3 Dec 13 00:25:55.573962 systemd[1]: Started cri-containerd-738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c.scope - libcontainer container 738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c. Dec 13 00:25:55.611890 systemd[1]: Started cri-containerd-52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1.scope - libcontainer container 52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1. Dec 13 00:25:55.615110 systemd[1]: Started cri-containerd-6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160.scope - libcontainer container 6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160. Dec 13 00:25:55.624000 audit: BPF prog-id=102 op=LOAD Dec 13 00:25:55.626000 audit: BPF prog-id=103 op=LOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.626000 audit: BPF prog-id=103 op=UNLOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.626000 audit: BPF prog-id=104 op=LOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.626000 audit: BPF prog-id=105 op=LOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.626000 audit: BPF prog-id=105 op=UNLOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.626000 audit: BPF prog-id=104 op=UNLOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.628507 kubelet[2429]: E1213 00:25:55.627827 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:25:55.626000 audit: BPF prog-id=106 op=LOAD Dec 13 00:25:55.626000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2519 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733386231353661623531643265313239663132363137633063383432 Dec 13 00:25:55.640000 audit: BPF prog-id=107 op=LOAD Dec 13 00:25:55.641000 audit: BPF prog-id=108 op=LOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.641000 audit: BPF prog-id=108 op=UNLOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.641000 audit: BPF prog-id=109 op=LOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.641000 audit: BPF prog-id=110 op=LOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.641000 audit: BPF prog-id=110 op=UNLOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.641000 audit: BPF prog-id=109 op=UNLOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.641000 audit: BPF prog-id=111 op=LOAD Dec 13 00:25:55.641000 audit[2619]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2485 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623330313333313662623338363732633965626636356233663032 Dec 13 00:25:55.642000 audit: BPF prog-id=112 op=LOAD Dec 13 00:25:55.643000 audit: BPF prog-id=113 op=LOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.643000 audit: BPF prog-id=113 op=UNLOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.643000 audit: BPF prog-id=114 op=LOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.643000 audit: BPF prog-id=115 op=LOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.643000 audit: BPF prog-id=115 op=UNLOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.643000 audit: BPF prog-id=114 op=UNLOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.643000 audit: BPF prog-id=116 op=LOAD Dec 13 00:25:55.643000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2486 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:55.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663643032303362373032313537626464636339643638376563666363 Dec 13 00:25:55.680983 containerd[1623]: time="2025-12-13T00:25:55.680056759Z" level=info msg="StartContainer for \"738b156ab51d2e129f12617c0c842bbc89bf04010163bfe041ac624c5d13661c\" returns successfully" Dec 13 00:25:55.723313 kubelet[2429]: E1213 00:25:55.723259 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:25:55.726597 kubelet[2429]: E1213 00:25:55.726576 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:55.727075 kubelet[2429]: E1213 00:25:55.726850 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:55.739191 containerd[1623]: time="2025-12-13T00:25:55.739132167Z" level=info msg="StartContainer for \"6cd0203b702157bddcc9d687ecfcc07a34e877176232293902df1246dabb2160\" returns successfully" 
Dec 13 00:25:55.752602 containerd[1623]: time="2025-12-13T00:25:55.752550531Z" level=info msg="StartContainer for \"52b3013316bb38672c9ebf65b3f0237668c89cf094a525a0e6d0da19d9a1f8f1\" returns successfully" Dec 13 00:25:55.803520 kubelet[2429]: E1213 00:25:55.803468 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 00:25:56.296629 kubelet[2429]: I1213 00:25:56.296595 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:56.740943 kubelet[2429]: E1213 00:25:56.740599 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:56.740943 kubelet[2429]: E1213 00:25:56.740742 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:56.743841 kubelet[2429]: E1213 00:25:56.743818 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:56.744234 kubelet[2429]: E1213 00:25:56.743917 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:56.744234 kubelet[2429]: E1213 00:25:56.744047 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:56.744234 kubelet[2429]: E1213 00:25:56.744182 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:57.555017 kubelet[2429]: E1213 00:25:57.554597 2429 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 00:25:57.657687 kubelet[2429]: I1213 00:25:57.657636 2429 apiserver.go:52] "Watching apiserver" Dec 13 00:25:57.674555 kubelet[2429]: I1213 00:25:57.674499 2429 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 00:25:57.702972 kubelet[2429]: E1213 00:25:57.702854 2429 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18809eb51e49d0d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:25:54.663657688 +0000 UTC m=+1.377179404,LastTimestamp:2025-12-13 00:25:54.663657688 +0000 UTC m=+1.377179404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:25:57.703410 kubelet[2429]: I1213 00:25:57.703360 2429 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:25:57.703410 kubelet[2429]: E1213 
00:25:57.703405 2429 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 00:25:57.743804 kubelet[2429]: I1213 00:25:57.743770 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:57.744204 kubelet[2429]: I1213 00:25:57.743777 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:57.744204 kubelet[2429]: I1213 00:25:57.743940 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:57.775570 kubelet[2429]: I1213 00:25:57.775516 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:57.895143 kubelet[2429]: E1213 00:25:57.894055 2429 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18809eb51f1fe2f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:25:54.677687028 +0000 UTC m=+1.391208744,LastTimestamp:2025-12-13 00:25:54.677687028 +0000 UTC m=+1.391208744,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:25:57.925306 kubelet[2429]: E1213 00:25:57.925210 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:57.925306 kubelet[2429]: E1213 00:25:57.925243 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:57.925618 kubelet[2429]: I1213 00:25:57.925247 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:57.925618 kubelet[2429]: E1213 00:25:57.925475 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:57.925618 kubelet[2429]: E1213 00:25:57.925536 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:57.925618 kubelet[2429]: E1213 00:25:57.925211 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:57.925830 kubelet[2429]: E1213 00:25:57.925706 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:57.925830 kubelet[2429]: E1213 00:25:57.925708 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:57.927464 kubelet[2429]: E1213 00:25:57.927424 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:57.927464 kubelet[2429]: I1213 00:25:57.927450 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:57.928930 kubelet[2429]: E1213 00:25:57.928898 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:58.744699 kubelet[2429]: I1213 00:25:58.744658 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:58.744699 kubelet[2429]: I1213 00:25:58.744680 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:58.749931 kubelet[2429]: E1213 00:25:58.749904 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:58.750897 kubelet[2429]: E1213 00:25:58.750869 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:59.746135 kubelet[2429]: E1213 00:25:59.746102 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:59.746548 kubelet[2429]: E1213 00:25:59.746110 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:59.753546 systemd[1]: Reload requested from client PID 2711 ('systemctl') (unit session-8.scope)... Dec 13 00:25:59.753564 systemd[1]: Reloading... Dec 13 00:25:59.836411 zram_generator::config[2754]: No configuration found. Dec 13 00:26:00.100144 systemd[1]: Reloading finished in 346 ms. Dec 13 00:26:00.126520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:26:00.152654 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 00:26:00.153024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:00.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:00.153093 systemd[1]: kubelet.service: Consumed 1.151s CPU time, 132M memory peak. Dec 13 00:26:00.154427 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 13 00:26:00.154515 kernel: audit: type=1131 audit(1765585560.151:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:00.155093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 00:26:00.157000 audit: BPF prog-id=117 op=LOAD Dec 13 00:26:00.161120 kernel: audit: type=1334 audit(1765585560.157:400): prog-id=117 op=LOAD Dec 13 00:26:00.161159 kernel: audit: type=1334 audit(1765585560.157:401): prog-id=67 op=UNLOAD Dec 13 00:26:00.161182 kernel: audit: type=1334 audit(1765585560.157:402): prog-id=118 op=LOAD Dec 13 00:26:00.161203 kernel: audit: type=1334 audit(1765585560.157:403): prog-id=119 op=LOAD Dec 13 00:26:00.161223 kernel: audit: type=1334 audit(1765585560.157:404): prog-id=68 op=UNLOAD Dec 13 00:26:00.161321 kernel: audit: type=1334 audit(1765585560.157:405): prog-id=69 op=UNLOAD Dec 13 00:26:00.157000 audit: BPF prog-id=67 op=UNLOAD Dec 13 00:26:00.157000 audit: BPF prog-id=118 op=LOAD Dec 13 00:26:00.157000 audit: BPF prog-id=119 op=LOAD Dec 13 00:26:00.157000 audit: BPF prog-id=68 op=UNLOAD Dec 13 00:26:00.157000 audit: BPF prog-id=69 op=UNLOAD Dec 13 00:26:00.158000 audit: BPF prog-id=120 op=LOAD Dec 13 00:26:00.170225 kernel: audit: type=1334 audit(1765585560.158:406): prog-id=120 op=LOAD Dec 13 00:26:00.170270 kernel: audit: type=1334 audit(1765585560.158:407): prog-id=73 op=UNLOAD Dec 13 00:26:00.158000 audit: BPF prog-id=73 op=UNLOAD Dec 13 00:26:00.159000 audit: BPF prog-id=121 op=LOAD Dec 13 00:26:00.173307 kernel: audit: type=1334 audit(1765585560.159:408): prog-id=121 op=LOAD Dec 13 00:26:00.159000 audit: BPF prog-id=70 op=UNLOAD Dec 13 00:26:00.159000 audit: BPF prog-id=122 op=LOAD Dec 13 00:26:00.159000 audit: BPF prog-id=123 op=LOAD Dec 13 00:26:00.159000 audit: BPF prog-id=71 op=UNLOAD Dec 13 00:26:00.159000 audit: BPF prog-id=72 op=UNLOAD Dec 13 00:26:00.160000 audit: BPF prog-id=124 op=LOAD Dec 13 00:26:00.160000 audit: BPF prog-id=74 op=UNLOAD Dec 13 00:26:00.161000 audit: BPF prog-id=125 op=LOAD Dec 13 00:26:00.161000 audit: BPF prog-id=75 op=UNLOAD Dec 13 00:26:00.161000 audit: BPF prog-id=126 op=LOAD Dec 13 00:26:00.161000 audit: BPF prog-id=127 op=LOAD Dec 13 00:26:00.161000 audit: BPF prog-id=76 op=UNLOAD Dec 13 00:26:00.161000 audit: BPF prog-id=77 op=UNLOAD Dec 13 00:26:00.162000 audit: BPF prog-id=128 op=LOAD Dec 13 00:26:00.162000 audit: BPF prog-id=81 op=UNLOAD Dec 13 00:26:00.190000 audit: BPF prog-id=129 op=LOAD Dec 13 00:26:00.190000 audit: BPF prog-id=84 op=UNLOAD Dec 13 00:26:00.190000 audit: BPF prog-id=130 op=LOAD Dec 13 00:26:00.190000 audit: BPF prog-id=131 op=LOAD Dec 13 00:26:00.190000 audit: BPF prog-id=85 op=UNLOAD Dec 13 00:26:00.190000 audit: BPF prog-id=86 op=UNLOAD Dec 13 00:26:00.190000 audit: BPF prog-id=132 op=LOAD Dec 13 00:26:00.190000 audit: BPF prog-id=133 op=LOAD Dec 13 00:26:00.190000 audit: BPF prog-id=82 op=UNLOAD Dec 13 00:26:00.190000 audit: BPF prog-id=83 op=UNLOAD Dec 13 00:26:00.191000 audit: BPF prog-id=134 op=LOAD Dec 13 00:26:00.191000 audit: BPF prog-id=78 op=UNLOAD Dec 13 00:26:00.192000 audit: BPF prog-id=135 op=LOAD Dec 13 00:26:00.192000 audit: BPF prog-id=136 op=LOAD Dec 13 00:26:00.192000 audit: BPF prog-id=79 op=UNLOAD Dec 13 00:26:00.192000 audit: BPF prog-id=80 op=UNLOAD Dec 13 00:26:00.398034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:26:00.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:00.407774 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:26:00.459224 kubelet[2802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:26:00.459224 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:26:00.459224 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:26:00.459660 kubelet[2802]: I1213 00:26:00.459264 2802 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:26:00.465071 kubelet[2802]: I1213 00:26:00.465037 2802 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 13 00:26:00.465071 kubelet[2802]: I1213 00:26:00.465056 2802 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:26:00.465232 kubelet[2802]: I1213 00:26:00.465211 2802 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:26:00.467963 kubelet[2802]: I1213 00:26:00.467942 2802 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 13 00:26:00.471266 kubelet[2802]: I1213 00:26:00.471169 2802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:26:00.475325 kubelet[2802]: I1213 00:26:00.475294 2802 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:26:00.481021 kubelet[2802]: I1213 00:26:00.480984 2802 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 00:26:00.481265 kubelet[2802]: I1213 00:26:00.481210 2802 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:26:00.481458 kubelet[2802]: I1213 00:26:00.481265 2802 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:26:00.481548 kubelet[2802]: I1213 00:26:00.481461 2802 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:26:00.481548 kubelet[2802]: I1213 00:26:00.481471 2802 container_manager_linux.go:303] "Creating device plugin manager" Dec 13 00:26:00.481548 kubelet[2802]: I1213 00:26:00.481532 2802 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:26:00.483951 kubelet[2802]: I1213 00:26:00.483906 2802 kubelet.go:480] "Attempting to sync node with API server" Dec 13 00:26:00.483951 kubelet[2802]: I1213 00:26:00.483931 2802 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:26:00.484027 kubelet[2802]: I1213 00:26:00.483970 2802 kubelet.go:386] "Adding apiserver pod source" Dec 13 00:26:00.484055 kubelet[2802]: I1213 00:26:00.484016 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:26:00.487736 kubelet[2802]: I1213 00:26:00.485607 2802 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:26:00.487736 kubelet[2802]: I1213 00:26:00.486592 2802 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:26:00.492402 kubelet[2802]: I1213 00:26:00.491743 2802 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 00:26:00.492402 kubelet[2802]: I1213 00:26:00.491791 2802 server.go:1289] "Started kubelet" Dec 13 00:26:00.492919 kubelet[2802]: I1213 00:26:00.492724 2802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:26:00.493848 
kubelet[2802]: I1213 00:26:00.493813 2802 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:26:00.493912 kubelet[2802]: I1213 00:26:00.493887 2802 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:26:00.494679 kubelet[2802]: I1213 00:26:00.494645 2802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:26:00.495682 kubelet[2802]: I1213 00:26:00.495495 2802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:26:00.496284 kubelet[2802]: I1213 00:26:00.496240 2802 server.go:317] "Adding debug handlers to kubelet server" Dec 13 00:26:00.499351 kubelet[2802]: I1213 00:26:00.498536 2802 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 00:26:00.499351 kubelet[2802]: I1213 00:26:00.498658 2802 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 00:26:00.499351 kubelet[2802]: I1213 00:26:00.498825 2802 reconciler.go:26] "Reconciler: start to sync state" Dec 13 00:26:00.499351 kubelet[2802]: E1213 00:26:00.499290 2802 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:26:00.499915 kubelet[2802]: I1213 00:26:00.499897 2802 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:26:00.500127 kubelet[2802]: I1213 00:26:00.500095 2802 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:26:00.503281 kubelet[2802]: I1213 00:26:00.503239 2802 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:26:00.515518 kubelet[2802]: I1213 00:26:00.515453 2802 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 13 00:26:00.517152 kubelet[2802]: I1213 00:26:00.517105 2802 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 13 00:26:00.517195 kubelet[2802]: I1213 00:26:00.517157 2802 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 13 00:26:00.517195 kubelet[2802]: I1213 00:26:00.517184 2802 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 13 00:26:00.517259 kubelet[2802]: I1213 00:26:00.517215 2802 kubelet.go:2436] "Starting kubelet main sync loop" Dec 13 00:26:00.517315 kubelet[2802]: E1213 00:26:00.517272 2802 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:26:00.540369 kubelet[2802]: I1213 00:26:00.540326 2802 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:26:00.540369 kubelet[2802]: I1213 00:26:00.540343 2802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:26:00.540369 kubelet[2802]: I1213 00:26:00.540361 2802 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:26:00.540545 kubelet[2802]: I1213 00:26:00.540494 2802 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 00:26:00.540545 kubelet[2802]: I1213 00:26:00.540510 2802 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 00:26:00.540545 kubelet[2802]: I1213 00:26:00.540526 2802 policy_none.go:49] "None policy: Start" Dec 13 00:26:00.540545 kubelet[2802]: I1213 00:26:00.540536 2802 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 00:26:00.540622 kubelet[2802]: I1213 00:26:00.540551 2802 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:26:00.540661 kubelet[2802]: I1213 00:26:00.540641 2802 state_mem.go:75] "Updated machine memory state" Dec 13 00:26:00.544631 kubelet[2802]: E1213 00:26:00.544595 2802 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:26:00.544799 kubelet[2802]: I1213 00:26:00.544787 2802 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:26:00.544832 kubelet[2802]: I1213 00:26:00.544800 2802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:26:00.546395 kubelet[2802]: I1213 00:26:00.545672 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:26:00.546395 kubelet[2802]: E1213 00:26:00.545821 2802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 00:26:00.618086 kubelet[2802]: I1213 00:26:00.618033 2802 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:00.618086 kubelet[2802]: I1213 00:26:00.618064 2802 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:00.618269 kubelet[2802]: I1213 00:26:00.618237 2802 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.651754 kubelet[2802]: I1213 00:26:00.651609 2802 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:26:00.700211 kubelet[2802]: I1213 00:26:00.700169 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:00.700211 kubelet[2802]: I1213 00:26:00.700201 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:00.700211 kubelet[2802]: I1213 00:26:00.700222 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e197405f513e0a52a9b18b708e4ceb0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e197405f513e0a52a9b18b708e4ceb0d\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:00.700469 kubelet[2802]: I1213 00:26:00.700237 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.700469 kubelet[2802]: I1213 00:26:00.700272 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.700469 kubelet[2802]: I1213 00:26:00.700288 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.700469 kubelet[2802]: I1213 00:26:00.700300 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.700469 kubelet[2802]: I1213 00:26:00.700314 2802 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.700596 kubelet[2802]: I1213 00:26:00.700329 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:00.748697 kubelet[2802]: E1213 00:26:00.748646 2802 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:00.748858 kubelet[2802]: E1213 00:26:00.748799 2802 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:26:00.750568 kubelet[2802]: I1213 00:26:00.750546 2802 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 00:26:00.750637 kubelet[2802]: I1213 00:26:00.750625 2802 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:26:01.044386 kubelet[2802]: E1213 00:26:01.044352 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:01.049469 kubelet[2802]: E1213 00:26:01.049405 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:01.049469 kubelet[2802]: E1213 00:26:01.049451 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:01.484781 kubelet[2802]: I1213 00:26:01.484626 2802 apiserver.go:52] "Watching apiserver" Dec 13 00:26:01.498776 kubelet[2802]: I1213 00:26:01.498728 2802 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 00:26:01.528897 kubelet[2802]: I1213 00:26:01.528838 2802 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:01.529502 kubelet[2802]: I1213 00:26:01.529452 2802 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:01.529830 kubelet[2802]: E1213 00:26:01.529799 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:01.537995 kubelet[2802]: E1213 00:26:01.537943 2802 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:26:01.538508 kubelet[2802]: E1213 00:26:01.538153 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:01.538508 kubelet[2802]: E1213 00:26:01.538275 2802 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 00:26:01.538821 kubelet[2802]: E1213 00:26:01.538797 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:01.550806 kubelet[2802]: I1213 00:26:01.550745 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.550727288 podStartE2EDuration="1.550727288s" podCreationTimestamp="2025-12-13 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:01.550704996 +0000 UTC m=+1.128983500" watchObservedRunningTime="2025-12-13 00:26:01.550727288 +0000 UTC m=+1.129005792" Dec 13 00:26:01.561412 kubelet[2802]: I1213 00:26:01.560487 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.560469768 podStartE2EDuration="3.560469768s" podCreationTimestamp="2025-12-13 00:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:01.560264463 +0000 UTC m=+1.138542967" watchObservedRunningTime="2025-12-13 00:26:01.560469768 +0000 UTC m=+1.138748272" Dec 13 00:26:01.588403 kubelet[2802]: I1213 00:26:01.587744 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.587724941 podStartE2EDuration="3.587724941s" podCreationTimestamp="2025-12-13 00:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:01.573527301 +0000 UTC m=+1.151805805" watchObservedRunningTime="2025-12-13 00:26:01.587724941 +0000 UTC m=+1.166003445" Dec 13 00:26:02.530106 kubelet[2802]: E1213 00:26:02.530071 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:02.532395 kubelet[2802]: E1213 00:26:02.531136 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:03.532859 kubelet[2802]: E1213 00:26:03.532816 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:05.567570 kubelet[2802]: E1213 00:26:05.567514 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:06.015548 kubelet[2802]: I1213 00:26:06.015436 2802 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 00:26:06.015902 containerd[1623]: time="2025-12-13T00:26:06.015852138Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 00:26:06.016299 kubelet[2802]: I1213 00:26:06.016084 2802 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 00:26:06.536690 kubelet[2802]: E1213 00:26:06.536645 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:06.910063 systemd[1]: Created slice kubepods-besteffort-podd7f6b0f9_bacd_48e8_ac8f_2287182f94ea.slice - libcontainer container kubepods-besteffort-podd7f6b0f9_bacd_48e8_ac8f_2287182f94ea.slice. Dec 13 00:26:06.940497 kubelet[2802]: I1213 00:26:06.940443 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7f6b0f9-bacd-48e8-ac8f-2287182f94ea-lib-modules\") pod \"kube-proxy-6dbwc\" (UID: \"d7f6b0f9-bacd-48e8-ac8f-2287182f94ea\") " pod="kube-system/kube-proxy-6dbwc" Dec 13 00:26:06.940497 kubelet[2802]: I1213 00:26:06.940476 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndq2n\" (UniqueName: \"kubernetes.io/projected/d7f6b0f9-bacd-48e8-ac8f-2287182f94ea-kube-api-access-ndq2n\") pod \"kube-proxy-6dbwc\" (UID: \"d7f6b0f9-bacd-48e8-ac8f-2287182f94ea\") " pod="kube-system/kube-proxy-6dbwc" Dec 13 00:26:06.940497 kubelet[2802]: I1213 00:26:06.940496 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7f6b0f9-bacd-48e8-ac8f-2287182f94ea-kube-proxy\") pod \"kube-proxy-6dbwc\" (UID: \"d7f6b0f9-bacd-48e8-ac8f-2287182f94ea\") " pod="kube-system/kube-proxy-6dbwc" Dec 13 00:26:06.940497 kubelet[2802]: I1213 00:26:06.940511 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7f6b0f9-bacd-48e8-ac8f-2287182f94ea-xtables-lock\") pod \"kube-proxy-6dbwc\" (UID: \"d7f6b0f9-bacd-48e8-ac8f-2287182f94ea\") " pod="kube-system/kube-proxy-6dbwc" Dec 13 00:26:07.173880 systemd[1]: Created slice kubepods-besteffort-pod47965fdb_2287_492f_a673_c7f0f6b25287.slice - libcontainer container kubepods-besteffort-pod47965fdb_2287_492f_a673_c7f0f6b25287.slice. 
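The two systemd "Created slice" entries reflect the "CgroupDriver":"systemd" setting from the NodeConfig logged earlier: each BestEffort pod is placed in a kubepods-besteffort-pod<uid>.slice unit, with the dashes of the pod UID rewritten as underscores. A sketch of that mapping, derived from these log lines rather than from kubelet source:

```python
# Map a pod UID to the systemd slice name seen in the "Created slice" entries.
def besteffort_slice(pod_uid: str) -> str:
    # Dashes in slice unit names denote nesting (kubepods -> besteffort -> pod),
    # so the UID's own dashes are replaced with underscores.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_slice("d7f6b0f9-bacd-48e8-ac8f-2287182f94ea"))
# kubepods-besteffort-podd7f6b0f9_bacd_48e8_ac8f_2287182f94ea.slice
```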
Dec 13 00:26:07.222217 kubelet[2802]: E1213 00:26:07.222145 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:07.222888 containerd[1623]: time="2025-12-13T00:26:07.222837224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dbwc,Uid:d7f6b0f9-bacd-48e8-ac8f-2287182f94ea,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:07.242704 kubelet[2802]: I1213 00:26:07.242539 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8bv\" (UniqueName: \"kubernetes.io/projected/47965fdb-2287-492f-a673-c7f0f6b25287-kube-api-access-nk8bv\") pod \"tigera-operator-7dcd859c48-mqtf8\" (UID: \"47965fdb-2287-492f-a673-c7f0f6b25287\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqtf8" Dec 13 00:26:07.242704 kubelet[2802]: I1213 00:26:07.242588 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47965fdb-2287-492f-a673-c7f0f6b25287-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mqtf8\" (UID: \"47965fdb-2287-492f-a673-c7f0f6b25287\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqtf8" Dec 13 00:26:07.257153 containerd[1623]: time="2025-12-13T00:26:07.257086268Z" level=info msg="connecting to shim 8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008" address="unix:///run/containerd/s/20f137666aba15f2204bfaa9ec5b734d497764f98849fedeafe63ec50697d89d" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:07.302689 systemd[1]: Started cri-containerd-8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008.scope - libcontainer container 8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008. 
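The unix socket in the "connecting to shim" message above appears again at 00:26:07.385, when containerd connects to the same address for the kube-proxy container, so a single shim serves both the sandbox and its container. A throwaway sketch that makes the grouping visible; `lines` is a hand-copied excerpt of those two messages and the regexes are assumptions about the message format, not containerd code:

```python
# Group containerd "connecting to shim" messages by unix socket address to see
# which containers share one shim process.
import re
from collections import defaultdict

lines = [
    'msg="connecting to shim 8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008" '
    'address="unix:///run/containerd/s/20f137666aba15f2204bfaa9ec5b734d497764f98849fedeafe63ec50697d89d"',
    'msg="connecting to shim a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7" '
    'address="unix:///run/containerd/s/20f137666aba15f2204bfaa9ec5b734d497764f98849fedeafe63ec50697d89d"',
]

by_socket = defaultdict(list)
for line in lines:
    shim = re.search(r'connecting to shim (\w+)', line).group(1)
    addr = re.search(r'address="([^"]+)"', line).group(1)
    by_socket[addr].append(shim[:12])

for addr, ids in by_socket.items():
    print(addr.rsplit("/", 1)[-1][:12], "->", ids)
# 20f137666aba -> ['8530e3b2d455', 'a83b917797c4']
```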
Dec 13 00:26:07.313000 audit: BPF prog-id=137 op=LOAD Dec 13 00:26:07.317842 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 00:26:07.317944 kernel: audit: type=1334 audit(1765585567.313:441): prog-id=137 op=LOAD Dec 13 00:26:07.314000 audit: BPF prog-id=138 op=LOAD Dec 13 00:26:07.319518 kernel: audit: type=1334 audit(1765585567.314:442): prog-id=138 op=LOAD Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.325793 kernel: audit: type=1300 audit(1765585567.314:442): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.332474 kernel: audit: type=1327 audit(1765585567.314:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: BPF prog-id=138 op=UNLOAD Dec 13 00:26:07.334604 kernel: audit: type=1334 audit(1765585567.314:443): prog-id=138 op=UNLOAD Dec 13 00:26:07.334657 kernel: audit: type=1300 audit(1765585567.314:443): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.340765 kernel: audit: type=1327 audit(1765585567.314:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.348820 kernel: audit: type=1334 audit(1765585567.314:444): prog-id=139 op=LOAD Dec 13 00:26:07.314000 audit: BPF prog-id=139 op=LOAD Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.352407 containerd[1623]: time="2025-12-13T00:26:07.352357173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dbwc,Uid:d7f6b0f9-bacd-48e8-ac8f-2287182f94ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008\"" Dec 13 00:26:07.353278 kubelet[2802]: E1213 00:26:07.353252 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:07.355284 kernel: audit: type=1300 audit(1765585567.314:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.361473 kernel: audit: type=1327 audit(1765585567.314:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: BPF prog-id=140 op=LOAD Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: BPF prog-id=140 op=UNLOAD Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: BPF prog-id=139 op=UNLOAD Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.314000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.314000 audit: BPF prog-id=141 op=LOAD Dec 13 00:26:07.314000 audit[2876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2865 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835333065336232643435353335653832663433326233353535336536 Dec 13 00:26:07.362616 containerd[1623]: time="2025-12-13T00:26:07.362564718Z" level=info msg="CreateContainer within sandbox \"8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 00:26:07.374928 containerd[1623]: time="2025-12-13T00:26:07.374883521Z" level=info msg="Container a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:07.383205 containerd[1623]: time="2025-12-13T00:26:07.383152703Z" level=info msg="CreateContainer within sandbox \"8530e3b2d45535e82f432b35553e67f82939d401f5259ac46a9aaa1547ab9008\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7\"" Dec 13 00:26:07.383824 containerd[1623]: time="2025-12-13T00:26:07.383782133Z" level=info msg="StartContainer for \"a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7\"" Dec 13 00:26:07.385292 containerd[1623]: time="2025-12-13T00:26:07.385263058Z" level=info msg="connecting to shim a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7" address="unix:///run/containerd/s/20f137666aba15f2204bfaa9ec5b734d497764f98849fedeafe63ec50697d89d" protocol=ttrpc version=3 Dec 13 00:26:07.415666 systemd[1]: Started cri-containerd-a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7.scope - libcontainer container a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7. 
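The audit PROCTITLE fields in the runc records above, and in the iptables/ip6tables NETFILTER_CFG records that follow, are hex-encoded because the recorded argv uses NUL separators between arguments. Decoding the first iptables record (00:26:07.649) shows kube-proxy creating its canary chain; the hex string below is copied verbatim from that record:

```python
# Decode an audit PROCTITLE value: the bytes are the process argv joined by
# NUL bytes, hex-encoded. This hex string is the first iptables NETFILTER_CFG
# proctitle further down in the log.
hexdata = ("69707461626C6573002D770035002D5700313030303030"
           "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")

argv = bytes.fromhex(hexdata).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

The runc proctitles above decode the same way, though they come out truncated because the kernel caps the PROCTITLE record length.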
Dec 13 00:26:07.478974 containerd[1623]: time="2025-12-13T00:26:07.477846987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqtf8,Uid:47965fdb-2287-492f-a673-c7f0f6b25287,Namespace:tigera-operator,Attempt:0,}" Dec 13 00:26:07.479000 audit: BPF prog-id=142 op=LOAD Dec 13 00:26:07.479000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2865 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336239313737393763343661393333636332346538616435333739 Dec 13 00:26:07.479000 audit: BPF prog-id=143 op=LOAD Dec 13 00:26:07.479000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2865 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336239313737393763343661393333636332346538616435333739 Dec 13 00:26:07.479000 audit: BPF prog-id=143 op=UNLOAD Dec 13 00:26:07.479000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2865 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336239313737393763343661393333636332346538616435333739 Dec 13 00:26:07.479000 audit: BPF prog-id=142 op=UNLOAD Dec 13 00:26:07.479000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2865 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336239313737393763343661393333636332346538616435333739 Dec 13 00:26:07.479000 audit: BPF prog-id=144 op=LOAD Dec 13 00:26:07.479000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2865 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.479000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138336239313737393763343661393333636332346538616435333739 Dec 13 00:26:07.500367 containerd[1623]: time="2025-12-13T00:26:07.500261906Z" level=info msg="connecting to shim bb634a8e6b7b403417e93c494315dc4838af209746e4ce26900e6979f9f8711c" address="unix:///run/containerd/s/2d2bf9c97f71b18207c573ed582f224c2dde8d47cee1c8e2d1ee17f6ff34733c" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:07.503139 containerd[1623]: time="2025-12-13T00:26:07.503115245Z" level=info msg="StartContainer for \"a83b917797c46a933cc24e8ad537930fd5c5288c4a4bf18aa0dead275b648ec7\" returns successfully" Dec 13 00:26:07.532555 systemd[1]: Started cri-containerd-bb634a8e6b7b403417e93c494315dc4838af209746e4ce26900e6979f9f8711c.scope - libcontainer container bb634a8e6b7b403417e93c494315dc4838af209746e4ce26900e6979f9f8711c. Dec 13 00:26:07.545815 kubelet[2802]: E1213 00:26:07.545770 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:07.549000 audit: BPF prog-id=145 op=LOAD Dec 13 00:26:07.549000 audit: BPF prog-id=146 op=LOAD Dec 13 00:26:07.549000 audit[2953]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2936 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.549000 audit: BPF prog-id=146 op=UNLOAD Dec 13 00:26:07.549000 audit[2953]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2936 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.549000 audit: BPF prog-id=147 op=LOAD Dec 13 00:26:07.549000 audit[2953]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2936 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.550000 audit: BPF prog-id=148 op=LOAD Dec 13 00:26:07.550000 audit[2953]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2936 pid=2953 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.550000 audit: BPF prog-id=148 op=UNLOAD Dec 13 00:26:07.550000 audit[2953]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2936 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.550000 audit: BPF prog-id=147 op=UNLOAD Dec 13 00:26:07.550000 audit[2953]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2936 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.550000 audit: BPF prog-id=149 op=LOAD Dec 13 00:26:07.550000 audit[2953]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2936 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363334613865366237623430333431376539336334393433313564 Dec 13 00:26:07.559187 kubelet[2802]: I1213 00:26:07.559121 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6dbwc" podStartSLOduration=1.559100195 podStartE2EDuration="1.559100195s" podCreationTimestamp="2025-12-13 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:07.558961836 +0000 UTC m=+7.137240340" watchObservedRunningTime="2025-12-13 00:26:07.559100195 +0000 UTC m=+7.137378699" Dec 13 00:26:07.591658 containerd[1623]: time="2025-12-13T00:26:07.591596133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqtf8,Uid:47965fdb-2287-492f-a673-c7f0f6b25287,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bb634a8e6b7b403417e93c494315dc4838af209746e4ce26900e6979f9f8711c\"" Dec 13 00:26:07.594601 containerd[1623]: time="2025-12-13T00:26:07.594565007Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 13 
00:26:07.649000 audit[3013]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.649000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc86b820a0 a2=0 a3=7ffc86b8208c items=0 ppid=2915 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:26:07.652000 audit[3015]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.652000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff62a18400 a2=0 a3=7fff62a183ec items=0 ppid=2915 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:26:07.652000 audit[3016]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.652000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe80623b80 a2=0 a3=7ffe80623b6c items=0 ppid=2915 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.652000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:26:07.654000 audit[3017]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.654000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5986f5f0 a2=0 a3=7fff5986f5dc items=0 ppid=2915 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:26:07.655000 audit[3019]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.655000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0fa992f0 a2=0 a3=7ffd0fa992dc items=0 ppid=2915 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:26:07.657000 audit[3020]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 
13 00:26:07.657000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2e61a1a0 a2=0 a3=7ffd2e61a18c items=0 ppid=2915 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:26:07.731307 kubelet[2802]: E1213 00:26:07.731171 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:07.753000 audit[3022]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.753000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd06a9ef80 a2=0 a3=7ffd06a9ef6c items=0 ppid=2915 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:26:07.757000 audit[3024]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.757000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce3bdd5e0 a2=0 a3=7ffce3bdd5cc items=0 ppid=2915 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 00:26:07.762000 audit[3027]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.762000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff4f822750 a2=0 a3=7fff4f82273c items=0 ppid=2915 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 00:26:07.764000 audit[3028]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.764000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd77a00b60 a2=0 a3=7ffd77a00b4c items=0 ppid=2915 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:26:07.767000 audit[3030]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.767000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc43d10dd0 a2=0 a3=7ffc43d10dbc items=0 ppid=2915 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:26:07.769000 audit[3031]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.769000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0e0cedd0 a2=0 a3=7ffe0e0cedbc items=0 ppid=2915 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:26:07.773000 audit[3033]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.773000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcfa8fb980 a2=0 a3=7ffcfa8fb96c items=0 ppid=2915 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 00:26:07.778000 audit[3036]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.778000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff7d427d10 a2=0 a3=7fff7d427cfc items=0 ppid=2915 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 00:26:07.780000 audit[3037]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.780000 
audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca6763a30 a2=0 a3=7ffca6763a1c items=0 ppid=2915 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:26:07.784000 audit[3039]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.784000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5b4310e0 a2=0 a3=7ffe5b4310cc items=0 ppid=2915 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:26:07.785000 audit[3040]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.785000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd093cc380 a2=0 a3=7ffd093cc36c items=0 ppid=2915 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:26:07.788000 audit[3042]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.788000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2a5a0e00 a2=0 a3=7ffd2a5a0dec items=0 ppid=2915 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 00:26:07.793000 audit[3045]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.793000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdb647eb0 a2=0 a3=7ffcdb647e9c items=0 ppid=2915 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.793000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 00:26:07.798000 audit[3048]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.798000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5035cf50 a2=0 a3=7ffc5035cf3c items=0 ppid=2915 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 00:26:07.800000 audit[3049]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.800000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf427c870 a2=0 a3=7ffcf427c85c items=0 ppid=2915 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:26:07.803000 audit[3051]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.803000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffceef337f0 a2=0 a3=7ffceef337dc items=0 ppid=2915 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:26:07.808000 audit[3054]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.808000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe62fc7920 a2=0 a3=7ffe62fc790c items=0 ppid=2915 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:26:07.809000 audit[3055]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.809000 audit[3055]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7fffd542aa50 a2=0 a3=7fffd542aa3c items=0 ppid=2915 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:26:07.813000 audit[3057]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:26:07.813000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff5f5c2de0 a2=0 a3=7fff5f5c2dcc items=0 ppid=2915 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:26:07.836000 audit[3063]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:07.836000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf0cfd760 a2=0 a3=7ffcf0cfd74c items=0 ppid=2915 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:07.854000 audit[3063]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:07.854000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcf0cfd760 a2=0 a3=7ffcf0cfd74c items=0 ppid=2915 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:07.856000 audit[3068]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.856000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffdd7e98d0 a2=0 a3=7fffdd7e98bc items=0 ppid=2915 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:26:07.860000 audit[3070]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.860000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7fff6d2774d0 a2=0 a3=7fff6d2774bc items=0 ppid=2915 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 00:26:07.865000 audit[3073]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.865000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff78fca1d0 a2=0 a3=7fff78fca1bc items=0 ppid=2915 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 00:26:07.868000 audit[3074]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.868000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff23e037b0 a2=0 a3=7fff23e0379c items=0 ppid=2915 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:26:07.872000 audit[3076]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.872000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe04a13ec0 a2=0 a3=7ffe04a13eac items=0 ppid=2915 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:26:07.874000 audit[3077]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.874000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf27cab00 a2=0 a3=7ffdf27caaec items=0 ppid=2915 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.874000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:26:07.878000 audit[3079]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.878000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe159a3ea0 a2=0 a3=7ffe159a3e8c items=0 ppid=2915 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 00:26:07.883000 audit[3082]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.883000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffddcde6ed0 a2=0 a3=7ffddcde6ebc items=0 ppid=2915 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 00:26:07.885000 audit[3083]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.885000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2519df50 a2=0 a3=7ffc2519df3c items=0 ppid=2915 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:26:07.888000 audit[3085]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.888000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd3a8f500 a2=0 a3=7ffcd3a8f4ec items=0 ppid=2915 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:26:07.889000 audit[3086]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.889000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffad0a1af0 a2=0 a3=7fffad0a1adc items=0 
ppid=2915 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:26:07.893000 audit[3088]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.893000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1e3a9b60 a2=0 a3=7ffe1e3a9b4c items=0 ppid=2915 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 00:26:07.898000 audit[3091]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.898000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff800b5b40 a2=0 a3=7fff800b5b2c items=0 ppid=2915 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 00:26:07.903000 audit[3094]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.903000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda0afb6c0 a2=0 a3=7ffda0afb6ac items=0 ppid=2915 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 00:26:07.905000 audit[3095]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.905000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2ad86aa0 a2=0 a3=7ffd2ad86a8c items=0 ppid=2915 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:26:07.908000 
audit[3097]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.908000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffee1ca6160 a2=0 a3=7ffee1ca614c items=0 ppid=2915 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:26:07.913000 audit[3100]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.913000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff7280f370 a2=0 a3=7fff7280f35c items=0 ppid=2915 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:26:07.915000 audit[3101]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.915000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff84c60f0 a2=0 a3=7ffff84c60dc items=0 ppid=2915 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:26:07.918000 audit[3103]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.918000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcce9239d0 a2=0 a3=7ffcce9239bc items=0 ppid=2915 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.918000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:26:07.920000 audit[3104]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.920000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff781eef10 a2=0 a3=7fff781eeefc items=0 ppid=2915 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 00:26:07.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:26:07.923000 audit[3106]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.923000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe66da6fa0 a2=0 a3=7ffe66da6f8c items=0 ppid=2915 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:26:07.928000 audit[3109]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:26:07.928000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcd44df8a0 a2=0 a3=7ffcd44df88c items=0 ppid=2915 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:26:07.933000 audit[3111]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:26:07.933000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffef2ee1500 a2=0 a3=7ffef2ee14ec items=0 ppid=2915 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.933000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:07.934000 audit[3111]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:26:07.934000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffef2ee1500 a2=0 a3=7ffef2ee14ec items=0 ppid=2915 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.934000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:08.548451 kubelet[2802]: E1213 00:26:08.548414 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:09.550260 kubelet[2802]: E1213 00:26:09.550196 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:09.569292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983694608.mount: Deactivated successfully. 
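Note: the NETFILTER_CFG/SYSCALL/PROCTITLE triples above show a single parent process (ppid 2915, presumably kube-proxy given the "kubernetes service portals" and "kubernetes load balancer firewall" rule comments) driving /usr/sbin/xtables-nft-multi to register the KUBE-* chains for both IPv4 (family=2) and IPv6 (family=10). The proctitle field is hex-encoded because the audited command line separates its arguments with NUL bytes. A minimal Python sketch for recovering the readable command, using the iptables-restore record above as input:

    # Decode an audit PROCTITLE value back into the original argv.
    # The hex string is copied from one of the iptables-restore records above;
    # argv entries are separated by NUL bytes, so split on b"\x00".
    hex_proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    args = [a.decode() for a in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(" ".join(args))  # iptables-restore -w 5 -W 100000 --noflush --counters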
Dec 13 00:26:09.984903 containerd[1623]: time="2025-12-13T00:26:09.984774489Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:09.986095 containerd[1623]: time="2025-12-13T00:26:09.986030053Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 13 00:26:09.987256 containerd[1623]: time="2025-12-13T00:26:09.987217861Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:09.989696 containerd[1623]: time="2025-12-13T00:26:09.989623911Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:09.990269 containerd[1623]: time="2025-12-13T00:26:09.990218997Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.395609826s" Dec 13 00:26:09.990269 containerd[1623]: time="2025-12-13T00:26:09.990256667Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 13 00:26:09.996874 containerd[1623]: time="2025-12-13T00:26:09.996834618Z" level=info msg="CreateContainer within sandbox \"bb634a8e6b7b403417e93c494315dc4838af209746e4ce26900e6979f9f8711c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 00:26:10.006678 containerd[1623]: time="2025-12-13T00:26:10.006630684Z" level=info msg="Container 2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:10.015346 containerd[1623]: time="2025-12-13T00:26:10.015294786Z" level=info msg="CreateContainer within sandbox \"bb634a8e6b7b403417e93c494315dc4838af209746e4ce26900e6979f9f8711c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378\"" Dec 13 00:26:10.016181 containerd[1623]: time="2025-12-13T00:26:10.015974270Z" level=info msg="StartContainer for \"2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378\"" Dec 13 00:26:10.016860 containerd[1623]: time="2025-12-13T00:26:10.016822361Z" level=info msg="connecting to shim 2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378" address="unix:///run/containerd/s/2d2bf9c97f71b18207c573ed582f224c2dde8d47cee1c8e2d1ee17f6ff34733c" protocol=ttrpc version=3 Dec 13 00:26:10.040551 systemd[1]: Started cri-containerd-2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378.scope - libcontainer container 2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378. 
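Note: the containerd entries above record the quay.io/tigera/operator:v1.38.7 pull (bytes read=23558205, completed in 2.395609826s) followed by container creation and start through the runtime shim. A quick back-of-the-envelope transfer rate from those two figures, treating "bytes read" as the payload actually fetched (an assumption about containerd's accounting, which may differ):

    # Rough effective transfer rate for the tigera/operator pull logged above.
    bytes_read = 23_558_205          # "bytes read" from the stop-pulling message
    duration_s = 2.395609826         # pull duration from the Pulled message
    print(f"{bytes_read / duration_s / 1024 / 1024:.1f} MiB/s")  # roughly 9.4 MiB/s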
Dec 13 00:26:10.054000 audit: BPF prog-id=150 op=LOAD Dec 13 00:26:10.055000 audit: BPF prog-id=151 op=LOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.055000 audit: BPF prog-id=151 op=UNLOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.055000 audit: BPF prog-id=152 op=LOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.055000 audit: BPF prog-id=153 op=LOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.055000 audit: BPF prog-id=153 op=UNLOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.055000 audit: BPF prog-id=152 op=UNLOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.055000 audit: BPF prog-id=154 op=LOAD Dec 13 00:26:10.055000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2936 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264303335633639646165313738643237646363656636396264323137 Dec 13 00:26:10.078521 containerd[1623]: time="2025-12-13T00:26:10.078366400Z" level=info msg="StartContainer for \"2d035c69dae178d27dccef69bd21702f31c01cf72874db487d3fba0e21161378\" returns successfully" Dec 13 00:26:10.295998 kubelet[2802]: E1213 00:26:10.295929 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:10.388944 update_engine[1590]: I20251213 00:26:10.388818 1590 update_attempter.cc:509] Updating boot flags... Dec 13 00:26:10.556419 kubelet[2802]: E1213 00:26:10.554799 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:11.556808 kubelet[2802]: E1213 00:26:11.556771 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:16.201955 sudo[1839]: pam_unix(sudo:session): session closed for user root Dec 13 00:26:16.200000 audit[1839]: USER_END pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:26:16.203682 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 13 00:26:16.203746 kernel: audit: type=1106 audit(1765585576.200:521): pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:26:16.211977 kernel: audit: type=1104 audit(1765585576.200:522): pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:26:16.200000 audit[1839]: CRED_DISP pid=1839 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:16.212157 sshd[1838]: Connection closed by 10.0.0.1 port 59000 Dec 13 00:26:16.212709 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:16.216000 audit[1834]: USER_END pid=1834 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:16.216000 audit[1834]: CRED_DISP pid=1834 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:16.223147 systemd-logind[1589]: Session 8 logged out. Waiting for processes to exit. Dec 13 00:26:16.224606 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:59000.service: Deactivated successfully. Dec 13 00:26:16.232087 kernel: audit: type=1106 audit(1765585576.216:523): pid=1834 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:16.232165 kernel: audit: type=1104 audit(1765585576.216:524): pid=1834 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:16.232190 kernel: audit: type=1131 audit(1765585576.223:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:16.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.117:22-10.0.0.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:16.234947 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 00:26:16.237220 systemd[1]: session-8.scope: Consumed 5.744s CPU time, 192M memory peak. Dec 13 00:26:16.239767 systemd-logind[1589]: Removed session 8. 
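Note: the USER_END/CRED_DISP/SERVICE_STOP records above close out the sudo session and SSH session 8 (systemd reports 5.744s CPU time and a 192M memory peak for the session scope). These audit records are flat key=value lines whose quoted msg='...' payload itself contains key=value pairs. A small sketch for splitting one of them into fields, using the SERVICE_STOP record above; it follows only the format visible here, not the full audit grammar:

    import shlex

    # SERVICE_STOP record copied from the log above.
    record = ('pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 '
              'msg=\'unit=sshd@6-10.0.0.117:22-10.0.0.1:59000 comm="systemd" '
              'exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success\'')
    # shlex keeps the single-quoted msg payload together as one token.
    fields = dict(tok.split("=", 1) for tok in shlex.split(record) if "=" in tok)
    print(fields["pid"], fields["msg"])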
Dec 13 00:26:17.635416 kernel: audit: type=1325 audit(1765585577.629:526): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:17.629000 audit[3233]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:17.629000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe46047f80 a2=0 a3=7ffe46047f6c items=0 ppid=2915 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:17.644404 kernel: audit: type=1300 audit(1765585577.629:526): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe46047f80 a2=0 a3=7ffe46047f6c items=0 ppid=2915 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:17.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:17.648404 kernel: audit: type=1327 audit(1765585577.629:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:17.660000 audit[3233]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:17.666410 kernel: audit: type=1325 audit(1765585577.660:527): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:17.660000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe46047f80 a2=0 a3=0 items=0 ppid=2915 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:17.675522 kernel: audit: type=1300 audit(1765585577.660:527): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe46047f80 a2=0 a3=0 items=0 ppid=2915 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:17.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:17.721000 audit[3235]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:17.721000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbdfb0c20 a2=0 a3=7ffdbdfb0c0c items=0 ppid=2915 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:17.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:17.727000 audit[3235]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3235 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:17.727000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbdfb0c20 a2=0 a3=0 items=0 ppid=2915 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:17.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.446251 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 00:26:21.446371 kernel: audit: type=1325 audit(1765585581.439:530): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.439000 audit[3238]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.439000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc127f75e0 a2=0 a3=7ffc127f75cc items=0 ppid=2915 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.455471 kernel: audit: type=1300 audit(1765585581.439:530): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc127f75e0 a2=0 a3=7ffc127f75cc items=0 ppid=2915 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.459420 kernel: audit: type=1327 audit(1765585581.439:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.458000 audit[3238]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.458000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc127f75e0 a2=0 a3=0 items=0 ppid=2915 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.470045 kernel: audit: type=1325 audit(1765585581.458:531): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.470138 kernel: audit: type=1300 audit(1765585581.458:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc127f75e0 a2=0 a3=0 items=0 ppid=2915 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.473469 kernel: audit: type=1327 audit(1765585581.458:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Dec 13 00:26:21.484000 audit[3240]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.484000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3c946490 a2=0 a3=7ffc3c94647c items=0 ppid=2915 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.496940 kernel: audit: type=1325 audit(1765585581.484:532): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.497051 kernel: audit: type=1300 audit(1765585581.484:532): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3c946490 a2=0 a3=7ffc3c94647c items=0 ppid=2915 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.497086 kernel: audit: type=1327 audit(1765585581.484:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.499000 audit[3240]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:21.499000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3c946490 a2=0 a3=0 items=0 ppid=2915 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:21.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:21.504415 kernel: audit: type=1325 audit(1765585581.499:533): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:23.448000 audit[3242]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:23.448000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdb919d490 a2=0 a3=7ffdb919d47c items=0 ppid=2915 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:23.454000 audit[3242]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:23.454000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb919d490 a2=0 a3=0 items=0 ppid=2915 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 00:26:23.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:23.474689 kubelet[2802]: I1213 00:26:23.474371 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mqtf8" podStartSLOduration=14.076516071 podStartE2EDuration="16.474348674s" podCreationTimestamp="2025-12-13 00:26:07 +0000 UTC" firstStartedPulling="2025-12-13 00:26:07.59333969 +0000 UTC m=+7.171618195" lastFinishedPulling="2025-12-13 00:26:09.991172294 +0000 UTC m=+9.569450798" observedRunningTime="2025-12-13 00:26:10.591440209 +0000 UTC m=+10.169718713" watchObservedRunningTime="2025-12-13 00:26:23.474348674 +0000 UTC m=+23.052627178" Dec 13 00:26:23.486000 audit[3244]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:23.486000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc7564d630 a2=0 a3=7ffc7564d61c items=0 ppid=2915 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:23.494000 audit[3244]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:23.494000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7564d630 a2=0 a3=0 items=0 ppid=2915 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:23.500725 systemd[1]: Created slice kubepods-besteffort-pod97e5f1ce_db48_41a8_95b1_ab795b0919cd.slice - libcontainer container kubepods-besteffort-pod97e5f1ce_db48_41a8_95b1_ab795b0919cd.slice. 
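Note: the pod_startup_latency_tracker record above reports podStartE2EDuration="16.474348674s" and podStartSLOduration=14.076516071 for the tigera-operator pod. Those values are consistent with E2E being observedRunningTime minus podCreationTimestamp, and the SLO duration being E2E minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small arithmetic check from the timestamps in the record; the formula is inferred from these numbers rather than quoted from kubelet source:

    # All timestamps in the record fall inside minute 00:26, so the
    # seconds-within-minute are enough for the subtraction.
    created        = 7.0             # podCreationTimestamp 00:26:07
    first_pull     = 7.59333969      # firstStartedPulling
    last_pull      = 9.991172294     # lastFinishedPulling
    observed_ready = 23.474348674    # observedRunningTime

    e2e = observed_ready - created         # 16.474348674s, matches podStartE2EDuration
    slo = e2e - (last_pull - first_pull)   # ~14.07651607s, matches podStartSLOduration up to rounding
    print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")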
Dec 13 00:26:23.547184 kubelet[2802]: I1213 00:26:23.547116 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/97e5f1ce-db48-41a8-95b1-ab795b0919cd-typha-certs\") pod \"calico-typha-76fbd75f44-xgszn\" (UID: \"97e5f1ce-db48-41a8-95b1-ab795b0919cd\") " pod="calico-system/calico-typha-76fbd75f44-xgszn" Dec 13 00:26:23.547184 kubelet[2802]: I1213 00:26:23.547181 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntfw\" (UniqueName: \"kubernetes.io/projected/97e5f1ce-db48-41a8-95b1-ab795b0919cd-kube-api-access-nntfw\") pod \"calico-typha-76fbd75f44-xgszn\" (UID: \"97e5f1ce-db48-41a8-95b1-ab795b0919cd\") " pod="calico-system/calico-typha-76fbd75f44-xgszn" Dec 13 00:26:23.547374 kubelet[2802]: I1213 00:26:23.547207 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97e5f1ce-db48-41a8-95b1-ab795b0919cd-tigera-ca-bundle\") pod \"calico-typha-76fbd75f44-xgszn\" (UID: \"97e5f1ce-db48-41a8-95b1-ab795b0919cd\") " pod="calico-system/calico-typha-76fbd75f44-xgszn" Dec 13 00:26:23.581993 systemd[1]: Created slice kubepods-besteffort-pod7fea11ac_ae8b_4e01_904f_9fe79c063c28.slice - libcontainer container kubepods-besteffort-pod7fea11ac_ae8b_4e01_904f_9fe79c063c28.slice. Dec 13 00:26:23.648228 kubelet[2802]: I1213 00:26:23.648161 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fea11ac-ae8b-4e01-904f-9fe79c063c28-tigera-ca-bundle\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648228 kubelet[2802]: I1213 00:26:23.648208 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-cni-bin-dir\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648228 kubelet[2802]: I1213 00:26:23.648245 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-lib-modules\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648525 kubelet[2802]: I1213 00:26:23.648324 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-xtables-lock\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648525 kubelet[2802]: I1213 00:26:23.648395 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-var-lib-calico\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648525 kubelet[2802]: I1213 00:26:23.648439 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-flexvol-driver-host\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648525 kubelet[2802]: I1213 00:26:23.648454 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7fea11ac-ae8b-4e01-904f-9fe79c063c28-node-certs\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648525 kubelet[2802]: I1213 00:26:23.648483 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-cni-net-dir\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648751 kubelet[2802]: I1213 00:26:23.648496 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-cni-log-dir\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648751 kubelet[2802]: I1213 00:26:23.648512 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-policysync\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648751 kubelet[2802]: I1213 00:26:23.648524 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7fea11ac-ae8b-4e01-904f-9fe79c063c28-var-run-calico\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.648751 kubelet[2802]: I1213 00:26:23.648538 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9vc\" (UniqueName: \"kubernetes.io/projected/7fea11ac-ae8b-4e01-904f-9fe79c063c28-kube-api-access-mp9vc\") pod \"calico-node-ds7r8\" (UID: \"7fea11ac-ae8b-4e01-904f-9fe79c063c28\") " pod="calico-system/calico-node-ds7r8" Dec 13 00:26:23.731738 kubelet[2802]: E1213 00:26:23.731581 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:23.749167 kubelet[2802]: I1213 00:26:23.749108 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs77h\" (UniqueName: \"kubernetes.io/projected/0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77-kube-api-access-qs77h\") pod \"csi-node-driver-hxl5n\" (UID: \"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77\") " pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:23.749401 kubelet[2802]: I1213 00:26:23.749346 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77-kubelet-dir\") pod \"csi-node-driver-hxl5n\" (UID: \"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77\") " pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:23.749450 kubelet[2802]: I1213 00:26:23.749405 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77-varrun\") pod \"csi-node-driver-hxl5n\" (UID: \"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77\") " pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:23.749604 kubelet[2802]: I1213 00:26:23.749575 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77-socket-dir\") pod \"csi-node-driver-hxl5n\" (UID: \"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77\") " pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:23.749678 kubelet[2802]: I1213 00:26:23.749667 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77-registration-dir\") pod \"csi-node-driver-hxl5n\" (UID: \"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77\") " pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:23.750763 kubelet[2802]: E1213 00:26:23.750722 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.750763 kubelet[2802]: W1213 00:26:23.750757 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.751748 kubelet[2802]: E1213 00:26:23.751714 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.752147 kubelet[2802]: E1213 00:26:23.752110 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.752205 kubelet[2802]: W1213 00:26:23.752135 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.752205 kubelet[2802]: E1213 00:26:23.752184 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.752707 kubelet[2802]: E1213 00:26:23.752539 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.752707 kubelet[2802]: W1213 00:26:23.752557 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.752707 kubelet[2802]: E1213 00:26:23.752571 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.755726 kubelet[2802]: E1213 00:26:23.755677 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.755726 kubelet[2802]: W1213 00:26:23.755697 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.755726 kubelet[2802]: E1213 00:26:23.755717 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.755968 kubelet[2802]: E1213 00:26:23.755946 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.755968 kubelet[2802]: W1213 00:26:23.755962 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.756073 kubelet[2802]: E1213 00:26:23.755972 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.756162 kubelet[2802]: E1213 00:26:23.756144 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.756162 kubelet[2802]: W1213 00:26:23.756157 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.756306 kubelet[2802]: E1213 00:26:23.756165 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.756365 kubelet[2802]: E1213 00:26:23.756345 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.756365 kubelet[2802]: W1213 00:26:23.756354 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.756365 kubelet[2802]: E1213 00:26:23.756363 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.758061 kubelet[2802]: E1213 00:26:23.757940 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.758061 kubelet[2802]: W1213 00:26:23.757993 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.758061 kubelet[2802]: E1213 00:26:23.758017 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.758565 kubelet[2802]: E1213 00:26:23.758551 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.758859 kubelet[2802]: W1213 00:26:23.758708 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.758859 kubelet[2802]: E1213 00:26:23.758725 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.759479 kubelet[2802]: E1213 00:26:23.759462 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.759479 kubelet[2802]: W1213 00:26:23.759475 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.759571 kubelet[2802]: E1213 00:26:23.759486 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.759726 kubelet[2802]: E1213 00:26:23.759709 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.759726 kubelet[2802]: W1213 00:26:23.759717 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.759726 kubelet[2802]: E1213 00:26:23.759726 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.760051 kubelet[2802]: E1213 00:26:23.760036 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.760096 kubelet[2802]: W1213 00:26:23.760057 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.760096 kubelet[2802]: E1213 00:26:23.760068 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.760506 kubelet[2802]: E1213 00:26:23.760494 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.760551 kubelet[2802]: W1213 00:26:23.760503 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.760551 kubelet[2802]: E1213 00:26:23.760522 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.760778 kubelet[2802]: E1213 00:26:23.760764 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.760778 kubelet[2802]: W1213 00:26:23.760774 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.760862 kubelet[2802]: E1213 00:26:23.760782 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.766312 kubelet[2802]: E1213 00:26:23.765578 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.766312 kubelet[2802]: W1213 00:26:23.765596 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.766312 kubelet[2802]: E1213 00:26:23.765626 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.803792 kubelet[2802]: E1213 00:26:23.803751 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:23.804940 containerd[1623]: time="2025-12-13T00:26:23.804893436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76fbd75f44-xgszn,Uid:97e5f1ce-db48-41a8-95b1-ab795b0919cd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:23.837022 containerd[1623]: time="2025-12-13T00:26:23.836749628Z" level=info msg="connecting to shim ed456bb912e02c6326f149ab98f864beabd3e362c2f6aa1593862a6a4ac8ee22" address="unix:///run/containerd/s/cfa39da0618346c07f207eef40a98e90476596700cbdabb79edf664408cb83f7" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:23.851138 kubelet[2802]: E1213 00:26:23.850997 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.851138 kubelet[2802]: W1213 00:26:23.851020 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.851138 kubelet[2802]: E1213 00:26:23.851042 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.851333 kubelet[2802]: E1213 00:26:23.851307 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.851358 kubelet[2802]: W1213 00:26:23.851334 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.851415 kubelet[2802]: E1213 00:26:23.851365 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.851823 kubelet[2802]: E1213 00:26:23.851789 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.851856 kubelet[2802]: W1213 00:26:23.851823 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.851879 kubelet[2802]: E1213 00:26:23.851852 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.852168 kubelet[2802]: E1213 00:26:23.852150 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.852168 kubelet[2802]: W1213 00:26:23.852164 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.852241 kubelet[2802]: E1213 00:26:23.852176 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.854301 kubelet[2802]: E1213 00:26:23.854257 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.854301 kubelet[2802]: W1213 00:26:23.854275 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.854301 kubelet[2802]: E1213 00:26:23.854288 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.854662 kubelet[2802]: E1213 00:26:23.854644 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.854662 kubelet[2802]: W1213 00:26:23.854659 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.854808 kubelet[2802]: E1213 00:26:23.854670 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.854956 kubelet[2802]: E1213 00:26:23.854938 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.854956 kubelet[2802]: W1213 00:26:23.854953 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.855019 kubelet[2802]: E1213 00:26:23.854965 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.855277 kubelet[2802]: E1213 00:26:23.855257 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.855277 kubelet[2802]: W1213 00:26:23.855271 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.855402 kubelet[2802]: E1213 00:26:23.855284 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.855594 kubelet[2802]: E1213 00:26:23.855577 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.855594 kubelet[2802]: W1213 00:26:23.855592 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.855669 kubelet[2802]: E1213 00:26:23.855603 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.855910 kubelet[2802]: E1213 00:26:23.855893 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.855910 kubelet[2802]: W1213 00:26:23.855908 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.856066 kubelet[2802]: E1213 00:26:23.855920 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.856217 kubelet[2802]: E1213 00:26:23.856191 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.856217 kubelet[2802]: W1213 00:26:23.856206 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.856278 kubelet[2802]: E1213 00:26:23.856218 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.856528 kubelet[2802]: E1213 00:26:23.856510 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.856528 kubelet[2802]: W1213 00:26:23.856526 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.856601 kubelet[2802]: E1213 00:26:23.856539 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.856890 kubelet[2802]: E1213 00:26:23.856862 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.856890 kubelet[2802]: W1213 00:26:23.856876 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.856890 kubelet[2802]: E1213 00:26:23.856886 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.857224 kubelet[2802]: E1213 00:26:23.857197 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.857224 kubelet[2802]: W1213 00:26:23.857211 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.857304 kubelet[2802]: E1213 00:26:23.857221 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.857592 kubelet[2802]: E1213 00:26:23.857575 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.857592 kubelet[2802]: W1213 00:26:23.857590 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.857680 kubelet[2802]: E1213 00:26:23.857601 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.858033 kubelet[2802]: E1213 00:26:23.858013 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.858033 kubelet[2802]: W1213 00:26:23.858027 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.858144 kubelet[2802]: E1213 00:26:23.858039 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.858331 kubelet[2802]: E1213 00:26:23.858308 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.858331 kubelet[2802]: W1213 00:26:23.858323 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.858397 kubelet[2802]: E1213 00:26:23.858334 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.858661 kubelet[2802]: E1213 00:26:23.858644 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.858661 kubelet[2802]: W1213 00:26:23.858657 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.858757 kubelet[2802]: E1213 00:26:23.858671 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.858986 kubelet[2802]: E1213 00:26:23.858948 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.858986 kubelet[2802]: W1213 00:26:23.858964 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.858986 kubelet[2802]: E1213 00:26:23.858975 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.859303 kubelet[2802]: E1213 00:26:23.859255 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.859303 kubelet[2802]: W1213 00:26:23.859271 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.859303 kubelet[2802]: E1213 00:26:23.859282 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.859599 kubelet[2802]: E1213 00:26:23.859569 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.859599 kubelet[2802]: W1213 00:26:23.859585 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.859599 kubelet[2802]: E1213 00:26:23.859595 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.859918 kubelet[2802]: E1213 00:26:23.859900 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.859918 kubelet[2802]: W1213 00:26:23.859914 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.859981 kubelet[2802]: E1213 00:26:23.859926 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:23.860621 kubelet[2802]: E1213 00:26:23.860573 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.860621 kubelet[2802]: W1213 00:26:23.860588 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.860621 kubelet[2802]: E1213 00:26:23.860600 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.860849 kubelet[2802]: E1213 00:26:23.860831 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.860849 kubelet[2802]: W1213 00:26:23.860846 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.860911 kubelet[2802]: E1213 00:26:23.860858 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.861211 kubelet[2802]: E1213 00:26:23.861193 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.861211 kubelet[2802]: W1213 00:26:23.861208 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.861272 kubelet[2802]: E1213 00:26:23.861220 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.871141 kubelet[2802]: E1213 00:26:23.871104 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:23.871141 kubelet[2802]: W1213 00:26:23.871123 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:23.871141 kubelet[2802]: E1213 00:26:23.871141 2802 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:23.876699 systemd[1]: Started cri-containerd-ed456bb912e02c6326f149ab98f864beabd3e362c2f6aa1593862a6a4ac8ee22.scope - libcontainer container ed456bb912e02c6326f149ab98f864beabd3e362c2f6aa1593862a6a4ac8ee22. 
Dec 13 00:26:23.886733 kubelet[2802]: E1213 00:26:23.886678 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:23.887877 containerd[1623]: time="2025-12-13T00:26:23.887829889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ds7r8,Uid:7fea11ac-ae8b-4e01-904f-9fe79c063c28,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:23.889000 audit: BPF prog-id=155 op=LOAD Dec 13 00:26:23.890000 audit: BPF prog-id=156 op=LOAD Dec 13 00:26:23.890000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.890000 audit: BPF prog-id=156 op=UNLOAD Dec 13 00:26:23.890000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.890000 audit: BPF prog-id=157 op=LOAD Dec 13 00:26:23.890000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.890000 audit: BPF prog-id=158 op=LOAD Dec 13 00:26:23.890000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.890000 audit: BPF prog-id=158 op=UNLOAD Dec 13 00:26:23.890000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.891000 audit: BPF prog-id=157 op=UNLOAD Dec 13 00:26:23.891000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.891000 audit: BPF prog-id=159 op=LOAD Dec 13 00:26:23.891000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3272 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564343536626239313265303263363332366631343961623938663836 Dec 13 00:26:23.929631 containerd[1623]: time="2025-12-13T00:26:23.929549187Z" level=info msg="connecting to shim 4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93" address="unix:///run/containerd/s/a526cea46d62670bd4466cc60c0623a303050088a368bb90e3741a93e85964a9" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:23.942318 containerd[1623]: time="2025-12-13T00:26:23.941634959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76fbd75f44-xgszn,Uid:97e5f1ce-db48-41a8-95b1-ab795b0919cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed456bb912e02c6326f149ab98f864beabd3e362c2f6aa1593862a6a4ac8ee22\"" Dec 13 00:26:23.942916 kubelet[2802]: E1213 00:26:23.942822 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:23.944363 containerd[1623]: time="2025-12-13T00:26:23.944289095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 13 00:26:23.969838 systemd[1]: Started cri-containerd-4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93.scope - libcontainer container 4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93. 
Dec 13 00:26:23.989000 audit: BPF prog-id=160 op=LOAD Dec 13 00:26:23.989000 audit: BPF prog-id=161 op=LOAD Dec 13 00:26:23.989000 audit[3355]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:23.990000 audit: BPF prog-id=161 op=UNLOAD Dec 13 00:26:23.990000 audit[3355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:23.990000 audit: BPF prog-id=162 op=LOAD Dec 13 00:26:23.990000 audit[3355]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:23.990000 audit: BPF prog-id=163 op=LOAD Dec 13 00:26:23.990000 audit[3355]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:23.990000 audit: BPF prog-id=163 op=UNLOAD Dec 13 00:26:23.990000 audit[3355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:23.990000 audit: BPF prog-id=162 op=UNLOAD Dec 13 00:26:23.990000 audit[3355]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:23.990000 audit: BPF prog-id=164 op=LOAD Dec 13 00:26:23.990000 audit[3355]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3337 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:23.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363362316666613338333737316538396166653732363636383634 Dec 13 00:26:24.012973 containerd[1623]: time="2025-12-13T00:26:24.012882635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ds7r8,Uid:7fea11ac-ae8b-4e01-904f-9fe79c063c28,Namespace:calico-system,Attempt:0,} returns sandbox id \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\"" Dec 13 00:26:24.014103 kubelet[2802]: E1213 00:26:24.014060 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:24.508000 audit[3384]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:24.508000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffccb00ceb0 a2=0 a3=7ffccb00ce9c items=0 ppid=2915 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:24.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:24.517000 audit[3384]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:24.517000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccb00ceb0 a2=0 a3=0 items=0 ppid=2915 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:24.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:25.517978 kubelet[2802]: E1213 00:26:25.517929 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" 
podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:27.518794 kubelet[2802]: E1213 00:26:27.518739 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:28.351323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188654612.mount: Deactivated successfully. Dec 13 00:26:29.518729 kubelet[2802]: E1213 00:26:29.518660 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:31.183557 containerd[1623]: time="2025-12-13T00:26:31.183471130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:31.254280 containerd[1623]: time="2025-12-13T00:26:31.254184992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35231371" Dec 13 00:26:31.419902 containerd[1623]: time="2025-12-13T00:26:31.419815382Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:31.487599 containerd[1623]: time="2025-12-13T00:26:31.487420024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:31.488214 containerd[1623]: time="2025-12-13T00:26:31.488168337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 7.543832405s" Dec 13 00:26:31.488214 containerd[1623]: time="2025-12-13T00:26:31.488212510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 13 00:26:31.489439 containerd[1623]: time="2025-12-13T00:26:31.489394788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 13 00:26:31.518228 kubelet[2802]: E1213 00:26:31.518168 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:31.666120 containerd[1623]: time="2025-12-13T00:26:31.666026156Z" level=info msg="CreateContainer within sandbox \"ed456bb912e02c6326f149ab98f864beabd3e362c2f6aa1593862a6a4ac8ee22\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 00:26:32.667303 containerd[1623]: time="2025-12-13T00:26:32.665342288Z" level=info msg="Container c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591: CDI 
devices from CRI Config.CDIDevices: []" Dec 13 00:26:33.036525 containerd[1623]: time="2025-12-13T00:26:33.036355856Z" level=info msg="CreateContainer within sandbox \"ed456bb912e02c6326f149ab98f864beabd3e362c2f6aa1593862a6a4ac8ee22\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591\"" Dec 13 00:26:33.038778 containerd[1623]: time="2025-12-13T00:26:33.037103547Z" level=info msg="StartContainer for \"c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591\"" Dec 13 00:26:33.038778 containerd[1623]: time="2025-12-13T00:26:33.038258032Z" level=info msg="connecting to shim c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591" address="unix:///run/containerd/s/cfa39da0618346c07f207eef40a98e90476596700cbdabb79edf664408cb83f7" protocol=ttrpc version=3 Dec 13 00:26:33.066587 systemd[1]: Started cri-containerd-c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591.scope - libcontainer container c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591. Dec 13 00:26:33.101419 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 13 00:26:33.101576 kernel: audit: type=1334 audit(1765585593.098:556): prog-id=165 op=LOAD Dec 13 00:26:33.098000 audit: BPF prog-id=165 op=LOAD Dec 13 00:26:33.104168 kernel: audit: type=1334 audit(1765585593.099:557): prog-id=166 op=LOAD Dec 13 00:26:33.099000 audit: BPF prog-id=166 op=LOAD Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.144083 kernel: audit: type=1300 audit(1765585593.099:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.144232 kernel: audit: type=1327 audit(1765585593.099:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.144265 kernel: audit: type=1334 audit(1765585593.099:558): prog-id=166 op=UNLOAD Dec 13 00:26:33.099000 audit: BPF prog-id=166 op=UNLOAD Dec 13 00:26:33.145601 kernel: audit: type=1300 audit(1765585593.099:558): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.151278 kernel: audit: type=1327 audit(1765585593.099:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.099000 audit: BPF prog-id=167 op=LOAD Dec 13 00:26:33.158444 kernel: audit: type=1334 audit(1765585593.099:559): prog-id=167 op=LOAD Dec 13 00:26:33.158534 kernel: audit: type=1300 audit(1765585593.099:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.172070 kernel: audit: type=1327 audit(1765585593.099:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.099000 audit: BPF prog-id=168 op=LOAD Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.099000 audit: BPF prog-id=168 op=UNLOAD Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.099000 
audit: BPF prog-id=167 op=UNLOAD Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.099000 audit: BPF prog-id=169 op=LOAD Dec 13 00:26:33.099000 audit[3395]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3272 pid=3395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:33.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332613733326462346231353464366236303536613263613938363637 Dec 13 00:26:33.518036 kubelet[2802]: E1213 00:26:33.517964 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:33.616029 containerd[1623]: time="2025-12-13T00:26:33.615917339Z" level=info msg="StartContainer for \"c2a732db4b154d6b6056a2ca98667763238adfe60f571d2f918f326616f93591\" returns successfully" Dec 13 00:26:34.363432 containerd[1623]: time="2025-12-13T00:26:34.363290387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:34.364679 containerd[1623]: time="2025-12-13T00:26:34.364632023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=741" Dec 13 00:26:34.366947 containerd[1623]: time="2025-12-13T00:26:34.366894276Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:34.369967 containerd[1623]: time="2025-12-13T00:26:34.369896595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:34.370929 containerd[1623]: time="2025-12-13T00:26:34.370855995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.881284957s" Dec 13 00:26:34.370929 containerd[1623]: time="2025-12-13T00:26:34.370908423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns 
image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 13 00:26:34.375991 containerd[1623]: time="2025-12-13T00:26:34.375951410Z" level=info msg="CreateContainer within sandbox \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 00:26:34.385843 containerd[1623]: time="2025-12-13T00:26:34.385795050Z" level=info msg="Container 7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:34.395664 containerd[1623]: time="2025-12-13T00:26:34.395599244Z" level=info msg="CreateContainer within sandbox \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474\"" Dec 13 00:26:34.396571 containerd[1623]: time="2025-12-13T00:26:34.396538425Z" level=info msg="StartContainer for \"7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474\"" Dec 13 00:26:34.398123 containerd[1623]: time="2025-12-13T00:26:34.398094263Z" level=info msg="connecting to shim 7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474" address="unix:///run/containerd/s/a526cea46d62670bd4466cc60c0623a303050088a368bb90e3741a93e85964a9" protocol=ttrpc version=3 Dec 13 00:26:34.425580 systemd[1]: Started cri-containerd-7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474.scope - libcontainer container 7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474. Dec 13 00:26:34.498000 audit: BPF prog-id=170 op=LOAD Dec 13 00:26:34.498000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3337 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761326364303032366264363165353836653138393338336462393764 Dec 13 00:26:34.498000 audit: BPF prog-id=171 op=LOAD Dec 13 00:26:34.498000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3337 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761326364303032366264363165353836653138393338336462393764 Dec 13 00:26:34.498000 audit: BPF prog-id=171 op=UNLOAD Dec 13 00:26:34.498000 audit[3436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.498000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761326364303032366264363165353836653138393338336462393764 Dec 13 00:26:34.498000 audit: BPF prog-id=170 op=UNLOAD Dec 13 00:26:34.498000 audit[3436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761326364303032366264363165353836653138393338336462393764 Dec 13 00:26:34.498000 audit: BPF prog-id=172 op=LOAD Dec 13 00:26:34.498000 audit[3436]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3337 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761326364303032366264363165353836653138393338336462393764 Dec 13 00:26:34.533208 systemd[1]: cri-containerd-7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474.scope: Deactivated successfully. Dec 13 00:26:34.541000 audit: BPF prog-id=172 op=UNLOAD Dec 13 00:26:34.546085 containerd[1623]: time="2025-12-13T00:26:34.545940198Z" level=info msg="received container exit event container_id:\"7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474\" id:\"7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474\" pid:3449 exited_at:{seconds:1765585594 nanos:535772531}" Dec 13 00:26:34.548731 containerd[1623]: time="2025-12-13T00:26:34.548688131Z" level=info msg="StartContainer for \"7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474\" returns successfully" Dec 13 00:26:34.576628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a2cd0026bd61e586e189383db97dc055de8f441e0e0664a456f4e118bcfc474-rootfs.mount: Deactivated successfully. 
Dec 13 00:26:34.691571 kubelet[2802]: E1213 00:26:34.621585 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:34.691571 kubelet[2802]: E1213 00:26:34.621898 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:34.802523 kubelet[2802]: I1213 00:26:34.802440 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76fbd75f44-xgszn" podStartSLOduration=4.256996567 podStartE2EDuration="11.802365052s" podCreationTimestamp="2025-12-13 00:26:23 +0000 UTC" firstStartedPulling="2025-12-13 00:26:23.943812341 +0000 UTC m=+23.522090845" lastFinishedPulling="2025-12-13 00:26:31.489180826 +0000 UTC m=+31.067459330" observedRunningTime="2025-12-13 00:26:34.802275985 +0000 UTC m=+34.380554519" watchObservedRunningTime="2025-12-13 00:26:34.802365052 +0000 UTC m=+34.380643586" Dec 13 00:26:35.517749 kubelet[2802]: E1213 00:26:35.517679 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:35.625962 kubelet[2802]: E1213 00:26:35.625928 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:35.626524 containerd[1623]: time="2025-12-13T00:26:35.626486843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 13 00:26:37.518117 kubelet[2802]: E1213 00:26:37.518068 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:38.171758 kubelet[2802]: I1213 00:26:38.171702 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:26:38.172144 kubelet[2802]: E1213 00:26:38.172119 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:38.209000 audit[3495]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:38.212618 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 00:26:38.212673 kernel: audit: type=1325 audit(1765585598.209:570): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:38.209000 audit[3495]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb687d5b0 a2=0 a3=7ffcb687d59c items=0 ppid=2915 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.209000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:38.228621 kernel: audit: type=1300 audit(1765585598.209:570): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb687d5b0 a2=0 a3=7ffcb687d59c items=0 ppid=2915 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.228885 kernel: audit: type=1327 audit(1765585598.209:570): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:38.228951 kernel: audit: type=1325 audit(1765585598.219:571): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:38.219000 audit[3495]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:38.219000 audit[3495]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcb687d5b0 a2=0 a3=7ffcb687d59c items=0 ppid=2915 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.239235 kernel: audit: type=1300 audit(1765585598.219:571): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcb687d5b0 a2=0 a3=7ffcb687d59c items=0 ppid=2915 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:38.243456 kernel: audit: type=1327 audit(1765585598.219:571): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:38.632171 kubelet[2802]: E1213 00:26:38.632128 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:38.745546 containerd[1623]: time="2025-12-13T00:26:38.745446652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:38.746809 containerd[1623]: time="2025-12-13T00:26:38.746769862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 13 00:26:38.748569 containerd[1623]: time="2025-12-13T00:26:38.748499105Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:38.754321 containerd[1623]: time="2025-12-13T00:26:38.752084429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:38.754321 containerd[1623]: time="2025-12-13T00:26:38.753994561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.127455259s" Dec 13 00:26:38.754321 containerd[1623]: time="2025-12-13T00:26:38.754033444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 13 00:26:38.760720 containerd[1623]: time="2025-12-13T00:26:38.760671792Z" level=info msg="CreateContainer within sandbox \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 00:26:38.773784 containerd[1623]: time="2025-12-13T00:26:38.773706666Z" level=info msg="Container 6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:38.870552 containerd[1623]: time="2025-12-13T00:26:38.870252013Z" level=info msg="CreateContainer within sandbox \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3\"" Dec 13 00:26:38.873976 containerd[1623]: time="2025-12-13T00:26:38.873918008Z" level=info msg="StartContainer for \"6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3\"" Dec 13 00:26:38.875814 containerd[1623]: time="2025-12-13T00:26:38.875786231Z" level=info msg="connecting to shim 6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3" address="unix:///run/containerd/s/a526cea46d62670bd4466cc60c0623a303050088a368bb90e3741a93e85964a9" protocol=ttrpc version=3 Dec 13 00:26:38.901652 systemd[1]: Started cri-containerd-6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3.scope - libcontainer container 6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3. 
Dec 13 00:26:38.970000 audit: BPF prog-id=173 op=LOAD Dec 13 00:26:38.970000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3337 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.979172 kernel: audit: type=1334 audit(1765585598.970:572): prog-id=173 op=LOAD Dec 13 00:26:38.979268 kernel: audit: type=1300 audit(1765585598.970:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3337 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.979304 kernel: audit: type=1327 audit(1765585598.970:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396436653732323630643433373834323831323164383962396464 Dec 13 00:26:38.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396436653732323630643433373834323831323164383962396464 Dec 13 00:26:38.986456 kernel: audit: type=1334 audit(1765585598.970:573): prog-id=174 op=LOAD Dec 13 00:26:38.970000 audit: BPF prog-id=174 op=LOAD Dec 13 00:26:38.970000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3337 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396436653732323630643433373834323831323164383962396464 Dec 13 00:26:38.970000 audit: BPF prog-id=174 op=UNLOAD Dec 13 00:26:38.970000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396436653732323630643433373834323831323164383962396464 Dec 13 00:26:38.970000 audit: BPF prog-id=173 op=UNLOAD Dec 13 00:26:38.970000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.970000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396436653732323630643433373834323831323164383962396464 Dec 13 00:26:38.971000 audit: BPF prog-id=175 op=LOAD Dec 13 00:26:38.971000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3337 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663396436653732323630643433373834323831323164383962396464 Dec 13 00:26:39.517958 kubelet[2802]: E1213 00:26:39.517891 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:40.864416 containerd[1623]: time="2025-12-13T00:26:40.863852159Z" level=info msg="StartContainer for \"6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3\" returns successfully" Dec 13 00:26:41.518272 kubelet[2802]: E1213 00:26:41.518212 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:41.821907 containerd[1623]: time="2025-12-13T00:26:41.821857210Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:26:41.825206 systemd[1]: cri-containerd-6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3.scope: Deactivated successfully. Dec 13 00:26:41.825726 systemd[1]: cri-containerd-6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3.scope: Consumed 638ms CPU time, 168.6M memory peak, 584K read from disk, 171.3M written to disk. Dec 13 00:26:41.826941 containerd[1623]: time="2025-12-13T00:26:41.826872825Z" level=info msg="received container exit event container_id:\"6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3\" id:\"6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3\" pid:3513 exited_at:{seconds:1765585601 nanos:826541474}" Dec 13 00:26:41.830000 audit: BPF prog-id=175 op=UNLOAD Dec 13 00:26:41.859221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c9d6e72260d4378428121d89b9dd77c5ffcd2d647b957abf6c8c6b5d4678bd3-rootfs.mount: Deactivated successfully. 
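The "cni config load failed: no network config found in /etc/cni/net.d" error above persists until the install-cni step drops a network config file into that directory. A minimal, illustrative Go sketch of the same directory check (not containerd's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	found := false
	for _, e := range entries {
		// CNI network configs are *.conf, *.conflist or *.json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("network config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no network config found in", dir)
	}
}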
Dec 13 00:26:41.866606 kubelet[2802]: E1213 00:26:41.866541 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:41.924425 kubelet[2802]: I1213 00:26:41.924371 2802 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 13 00:26:42.452101 systemd[1]: Created slice kubepods-besteffort-podd0ccec5a_3eb2_4766_8606_0b092ddbbee0.slice - libcontainer container kubepods-besteffort-podd0ccec5a_3eb2_4766_8606_0b092ddbbee0.slice. Dec 13 00:26:42.460813 systemd[1]: Created slice kubepods-besteffort-pod2103450f_178f_4fe4_be81_451ab8c0d111.slice - libcontainer container kubepods-besteffort-pod2103450f_178f_4fe4_be81_451ab8c0d111.slice. Dec 13 00:26:42.469301 systemd[1]: Created slice kubepods-burstable-pod8b50cafb_741f_496b_9d47_9978f4044509.slice - libcontainer container kubepods-burstable-pod8b50cafb_741f_496b_9d47_9978f4044509.slice. Dec 13 00:26:42.480615 kubelet[2802]: I1213 00:26:42.480567 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b50cafb-741f-496b-9d47-9978f4044509-config-volume\") pod \"coredns-674b8bbfcf-2xd67\" (UID: \"8b50cafb-741f-496b-9d47-9978f4044509\") " pod="kube-system/coredns-674b8bbfcf-2xd67" Dec 13 00:26:42.480615 kubelet[2802]: I1213 00:26:42.480611 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnrp\" (UniqueName: \"kubernetes.io/projected/8b50cafb-741f-496b-9d47-9978f4044509-kube-api-access-lbnrp\") pod \"coredns-674b8bbfcf-2xd67\" (UID: \"8b50cafb-741f-496b-9d47-9978f4044509\") " pod="kube-system/coredns-674b8bbfcf-2xd67" Dec 13 00:26:42.480812 kubelet[2802]: I1213 00:26:42.480636 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-ca-bundle\") pod \"whisker-c75f97b5d-hg69l\" (UID: \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\") " pod="calico-system/whisker-c75f97b5d-hg69l" Dec 13 00:26:42.480812 kubelet[2802]: I1213 00:26:42.480658 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2103450f-178f-4fe4-be81-451ab8c0d111-goldmane-ca-bundle\") pod \"goldmane-666569f655-lmfwt\" (UID: \"2103450f-178f-4fe4-be81-451ab8c0d111\") " pod="calico-system/goldmane-666569f655-lmfwt" Dec 13 00:26:42.480812 kubelet[2802]: I1213 00:26:42.480682 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a989c1a-b7fa-40df-8d15-8ca04e746459-config-volume\") pod \"coredns-674b8bbfcf-v228q\" (UID: \"2a989c1a-b7fa-40df-8d15-8ca04e746459\") " pod="kube-system/coredns-674b8bbfcf-v228q" Dec 13 00:26:42.480812 kubelet[2802]: I1213 00:26:42.480705 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2103450f-178f-4fe4-be81-451ab8c0d111-goldmane-key-pair\") pod \"goldmane-666569f655-lmfwt\" (UID: \"2103450f-178f-4fe4-be81-451ab8c0d111\") " pod="calico-system/goldmane-666569f655-lmfwt" Dec 13 00:26:42.480812 kubelet[2802]: I1213 00:26:42.480731 2802 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/baf0c6ad-ada1-4a24-b663-b32f96db48d0-calico-apiserver-certs\") pod \"calico-apiserver-7544896fd5-wwf2v\" (UID: \"baf0c6ad-ada1-4a24-b663-b32f96db48d0\") " pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" Dec 13 00:26:42.480944 kubelet[2802]: I1213 00:26:42.480749 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgsl\" (UniqueName: \"kubernetes.io/projected/2a989c1a-b7fa-40df-8d15-8ca04e746459-kube-api-access-cmgsl\") pod \"coredns-674b8bbfcf-v228q\" (UID: \"2a989c1a-b7fa-40df-8d15-8ca04e746459\") " pod="kube-system/coredns-674b8bbfcf-v228q" Dec 13 00:26:42.480944 kubelet[2802]: I1213 00:26:42.480768 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-backend-key-pair\") pod \"whisker-c75f97b5d-hg69l\" (UID: \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\") " pod="calico-system/whisker-c75f97b5d-hg69l" Dec 13 00:26:42.480944 kubelet[2802]: I1213 00:26:42.480784 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv75f\" (UniqueName: \"kubernetes.io/projected/2103450f-178f-4fe4-be81-451ab8c0d111-kube-api-access-cv75f\") pod \"goldmane-666569f655-lmfwt\" (UID: \"2103450f-178f-4fe4-be81-451ab8c0d111\") " pod="calico-system/goldmane-666569f655-lmfwt" Dec 13 00:26:42.480944 kubelet[2802]: I1213 00:26:42.480802 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szwz\" (UniqueName: \"kubernetes.io/projected/baf0c6ad-ada1-4a24-b663-b32f96db48d0-kube-api-access-2szwz\") pod \"calico-apiserver-7544896fd5-wwf2v\" (UID: \"baf0c6ad-ada1-4a24-b663-b32f96db48d0\") " pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" Dec 13 00:26:42.480944 kubelet[2802]: I1213 00:26:42.480821 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cc720b4-f00a-45c7-ad27-5d1040c88fe5-tigera-ca-bundle\") pod \"calico-kube-controllers-679b4bb9cf-2wh8x\" (UID: \"1cc720b4-f00a-45c7-ad27-5d1040c88fe5\") " pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" Dec 13 00:26:42.481065 kubelet[2802]: I1213 00:26:42.480838 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjcw\" (UniqueName: \"kubernetes.io/projected/1cc720b4-f00a-45c7-ad27-5d1040c88fe5-kube-api-access-twjcw\") pod \"calico-kube-controllers-679b4bb9cf-2wh8x\" (UID: \"1cc720b4-f00a-45c7-ad27-5d1040c88fe5\") " pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" Dec 13 00:26:42.481065 kubelet[2802]: I1213 00:26:42.480856 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lbz\" (UniqueName: \"kubernetes.io/projected/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-kube-api-access-k6lbz\") pod \"whisker-c75f97b5d-hg69l\" (UID: \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\") " pod="calico-system/whisker-c75f97b5d-hg69l" Dec 13 00:26:42.481065 kubelet[2802]: I1213 00:26:42.480872 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2103450f-178f-4fe4-be81-451ab8c0d111-config\") pod \"goldmane-666569f655-lmfwt\" (UID: \"2103450f-178f-4fe4-be81-451ab8c0d111\") " pod="calico-system/goldmane-666569f655-lmfwt" Dec 13 00:26:42.482615 systemd[1]: Created slice kubepods-besteffort-pod1cc720b4_f00a_45c7_ad27_5d1040c88fe5.slice - libcontainer container kubepods-besteffort-pod1cc720b4_f00a_45c7_ad27_5d1040c88fe5.slice. Dec 13 00:26:42.487938 systemd[1]: Created slice kubepods-besteffort-podbaf0c6ad_ada1_4a24_b663_b32f96db48d0.slice - libcontainer container kubepods-besteffort-podbaf0c6ad_ada1_4a24_b663_b32f96db48d0.slice. Dec 13 00:26:42.494499 systemd[1]: Created slice kubepods-burstable-pod2a989c1a_b7fa_40df_8d15_8ca04e746459.slice - libcontainer container kubepods-burstable-pod2a989c1a_b7fa_40df_8d15_8ca04e746459.slice. Dec 13 00:26:42.508963 systemd[1]: Created slice kubepods-besteffort-poda727a70e_6988_4a99_89fe_438d343667ab.slice - libcontainer container kubepods-besteffort-poda727a70e_6988_4a99_89fe_438d343667ab.slice. Dec 13 00:26:42.582239 kubelet[2802]: I1213 00:26:42.581913 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a727a70e-6988-4a99-89fe-438d343667ab-calico-apiserver-certs\") pod \"calico-apiserver-7544896fd5-8pbbj\" (UID: \"a727a70e-6988-4a99-89fe-438d343667ab\") " pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" Dec 13 00:26:42.582239 kubelet[2802]: I1213 00:26:42.582057 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrp7\" (UniqueName: \"kubernetes.io/projected/a727a70e-6988-4a99-89fe-438d343667ab-kube-api-access-wvrp7\") pod \"calico-apiserver-7544896fd5-8pbbj\" (UID: \"a727a70e-6988-4a99-89fe-438d343667ab\") " pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" Dec 13 00:26:42.758561 containerd[1623]: time="2025-12-13T00:26:42.758412050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c75f97b5d-hg69l,Uid:d0ccec5a-3eb2-4766-8606-0b092ddbbee0,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:42.766628 containerd[1623]: time="2025-12-13T00:26:42.766550291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmfwt,Uid:2103450f-178f-4fe4-be81-451ab8c0d111,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:42.777583 kubelet[2802]: E1213 00:26:42.777535 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.778603 containerd[1623]: time="2025-12-13T00:26:42.778557559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xd67,Uid:8b50cafb-741f-496b-9d47-9978f4044509,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:42.785949 containerd[1623]: time="2025-12-13T00:26:42.785905788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679b4bb9cf-2wh8x,Uid:1cc720b4-f00a-45c7-ad27-5d1040c88fe5,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:42.792739 containerd[1623]: time="2025-12-13T00:26:42.792687816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-wwf2v,Uid:baf0c6ad-ada1-4a24-b663-b32f96db48d0,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:42.798424 kubelet[2802]: E1213 00:26:42.798363 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.800930 containerd[1623]: time="2025-12-13T00:26:42.800879478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v228q,Uid:2a989c1a-b7fa-40df-8d15-8ca04e746459,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:42.819907 containerd[1623]: time="2025-12-13T00:26:42.819837169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-8pbbj,Uid:a727a70e-6988-4a99-89fe-438d343667ab,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:42.878253 kubelet[2802]: E1213 00:26:42.878224 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:42.879987 containerd[1623]: time="2025-12-13T00:26:42.879949387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 13 00:26:42.934334 containerd[1623]: time="2025-12-13T00:26:42.934273058Z" level=error msg="Failed to destroy network for sandbox \"b45fd69fcb5a5a037629a4b69124c8af9da1fece4a829d9a6a93d19d8aa93b82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.939162 systemd[1]: run-netns-cni\x2dbd79c4f3\x2d31bc\x2d6c02\x2d88df\x2de88e7b9a6f37.mount: Deactivated successfully. Dec 13 00:26:42.945491 containerd[1623]: time="2025-12-13T00:26:42.943376069Z" level=error msg="Failed to destroy network for sandbox \"c1ad10c7ffcb1dda86a0274a1a1ca4c75105595e17600f0cfd941f8eb8382071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.945632 containerd[1623]: time="2025-12-13T00:26:42.945514509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c75f97b5d-hg69l,Uid:d0ccec5a-3eb2-4766-8606-0b092ddbbee0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45fd69fcb5a5a037629a4b69124c8af9da1fece4a829d9a6a93d19d8aa93b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.946132 kubelet[2802]: E1213 00:26:42.945852 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45fd69fcb5a5a037629a4b69124c8af9da1fece4a829d9a6a93d19d8aa93b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.946132 kubelet[2802]: E1213 00:26:42.945950 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b45fd69fcb5a5a037629a4b69124c8af9da1fece4a829d9a6a93d19d8aa93b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c75f97b5d-hg69l" Dec 13 00:26:42.946132 kubelet[2802]: E1213 00:26:42.945982 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"b45fd69fcb5a5a037629a4b69124c8af9da1fece4a829d9a6a93d19d8aa93b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c75f97b5d-hg69l" Dec 13 00:26:42.946429 kubelet[2802]: E1213 00:26:42.946063 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c75f97b5d-hg69l_calico-system(d0ccec5a-3eb2-4766-8606-0b092ddbbee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c75f97b5d-hg69l_calico-system(d0ccec5a-3eb2-4766-8606-0b092ddbbee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b45fd69fcb5a5a037629a4b69124c8af9da1fece4a829d9a6a93d19d8aa93b82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c75f97b5d-hg69l" podUID="d0ccec5a-3eb2-4766-8606-0b092ddbbee0" Dec 13 00:26:42.949007 systemd[1]: run-netns-cni\x2df2fd6d90\x2d1d18\x2dd714\x2de9d5\x2d3caaa627adc1.mount: Deactivated successfully. Dec 13 00:26:42.956241 containerd[1623]: time="2025-12-13T00:26:42.956172947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v228q,Uid:2a989c1a-b7fa-40df-8d15-8ca04e746459,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad10c7ffcb1dda86a0274a1a1ca4c75105595e17600f0cfd941f8eb8382071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.956551 kubelet[2802]: E1213 00:26:42.956494 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad10c7ffcb1dda86a0274a1a1ca4c75105595e17600f0cfd941f8eb8382071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.956629 kubelet[2802]: E1213 00:26:42.956578 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad10c7ffcb1dda86a0274a1a1ca4c75105595e17600f0cfd941f8eb8382071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v228q" Dec 13 00:26:42.956629 kubelet[2802]: E1213 00:26:42.956602 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad10c7ffcb1dda86a0274a1a1ca4c75105595e17600f0cfd941f8eb8382071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v228q" Dec 13 00:26:42.956709 kubelet[2802]: E1213 00:26:42.956654 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v228q_kube-system(2a989c1a-b7fa-40df-8d15-8ca04e746459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-v228q_kube-system(2a989c1a-b7fa-40df-8d15-8ca04e746459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1ad10c7ffcb1dda86a0274a1a1ca4c75105595e17600f0cfd941f8eb8382071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v228q" podUID="2a989c1a-b7fa-40df-8d15-8ca04e746459" Dec 13 00:26:42.960624 containerd[1623]: time="2025-12-13T00:26:42.960566325Z" level=error msg="Failed to destroy network for sandbox \"1117a9f739e1bb60ca60d29231f7aa2417de575749423fb0143072f26e088b65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.961245 containerd[1623]: time="2025-12-13T00:26:42.961218198Z" level=error msg="Failed to destroy network for sandbox \"1324a00c8f618a1f0c27c43080d2545cced015ddd57c5d7d2f0c92e3fd7ed0fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.965071 systemd[1]: run-netns-cni\x2d0e6866b3\x2db481\x2d9d27\x2dc8d6\x2dbbe417d3e105.mount: Deactivated successfully. Dec 13 00:26:42.965230 systemd[1]: run-netns-cni\x2d99f38cf0\x2de93f\x2dfc04\x2d8bd9\x2d991d1bc60725.mount: Deactivated successfully. Dec 13 00:26:42.969757 containerd[1623]: time="2025-12-13T00:26:42.969640753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-wwf2v,Uid:baf0c6ad-ada1-4a24-b663-b32f96db48d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1117a9f739e1bb60ca60d29231f7aa2417de575749423fb0143072f26e088b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.970673 kubelet[2802]: E1213 00:26:42.970194 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1117a9f739e1bb60ca60d29231f7aa2417de575749423fb0143072f26e088b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.970673 kubelet[2802]: E1213 00:26:42.970268 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1117a9f739e1bb60ca60d29231f7aa2417de575749423fb0143072f26e088b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" Dec 13 00:26:42.970673 kubelet[2802]: E1213 00:26:42.970298 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1117a9f739e1bb60ca60d29231f7aa2417de575749423fb0143072f26e088b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" Dec 13 00:26:42.970858 kubelet[2802]: E1213 00:26:42.970360 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7544896fd5-wwf2v_calico-apiserver(baf0c6ad-ada1-4a24-b663-b32f96db48d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7544896fd5-wwf2v_calico-apiserver(baf0c6ad-ada1-4a24-b663-b32f96db48d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1117a9f739e1bb60ca60d29231f7aa2417de575749423fb0143072f26e088b65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:26:42.976671 containerd[1623]: time="2025-12-13T00:26:42.976543888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679b4bb9cf-2wh8x,Uid:1cc720b4-f00a-45c7-ad27-5d1040c88fe5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1324a00c8f618a1f0c27c43080d2545cced015ddd57c5d7d2f0c92e3fd7ed0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.977180 kubelet[2802]: E1213 00:26:42.977115 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1324a00c8f618a1f0c27c43080d2545cced015ddd57c5d7d2f0c92e3fd7ed0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.977449 kubelet[2802]: E1213 00:26:42.977336 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1324a00c8f618a1f0c27c43080d2545cced015ddd57c5d7d2f0c92e3fd7ed0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" Dec 13 00:26:42.977571 kubelet[2802]: E1213 00:26:42.977527 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1324a00c8f618a1f0c27c43080d2545cced015ddd57c5d7d2f0c92e3fd7ed0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" Dec 13 00:26:42.977820 kubelet[2802]: E1213 00:26:42.977776 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-679b4bb9cf-2wh8x_calico-system(1cc720b4-f00a-45c7-ad27-5d1040c88fe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-679b4bb9cf-2wh8x_calico-system(1cc720b4-f00a-45c7-ad27-5d1040c88fe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1324a00c8f618a1f0c27c43080d2545cced015ddd57c5d7d2f0c92e3fd7ed0fd\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:26:42.981737 containerd[1623]: time="2025-12-13T00:26:42.981637861Z" level=error msg="Failed to destroy network for sandbox \"3ebc8b26901209915511805b52ab0632bb693b6d84403dbdf7f2499d83cefc19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.985626 containerd[1623]: time="2025-12-13T00:26:42.985540970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xd67,Uid:8b50cafb-741f-496b-9d47-9978f4044509,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc8b26901209915511805b52ab0632bb693b6d84403dbdf7f2499d83cefc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.986076 kubelet[2802]: E1213 00:26:42.986021 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc8b26901209915511805b52ab0632bb693b6d84403dbdf7f2499d83cefc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.986205 kubelet[2802]: E1213 00:26:42.986106 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc8b26901209915511805b52ab0632bb693b6d84403dbdf7f2499d83cefc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2xd67" Dec 13 00:26:42.986205 kubelet[2802]: E1213 00:26:42.986129 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc8b26901209915511805b52ab0632bb693b6d84403dbdf7f2499d83cefc19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2xd67" Dec 13 00:26:42.986541 kubelet[2802]: E1213 00:26:42.986489 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2xd67_kube-system(8b50cafb-741f-496b-9d47-9978f4044509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2xd67_kube-system(8b50cafb-741f-496b-9d47-9978f4044509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ebc8b26901209915511805b52ab0632bb693b6d84403dbdf7f2499d83cefc19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2xd67" podUID="8b50cafb-741f-496b-9d47-9978f4044509" Dec 13 00:26:42.990667 containerd[1623]: time="2025-12-13T00:26:42.990544233Z" level=error msg="Failed to destroy 
network for sandbox \"6231615025b8de0e2fd84346f5fd1b03fcf7588b3e47dede3507fe4fe50482cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.992104 containerd[1623]: time="2025-12-13T00:26:42.992041040Z" level=error msg="Failed to destroy network for sandbox \"d6d2f5979a4cb332b9df5d24635e35ef429021e9b929e5ddc6bca26b73746dbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.995242 containerd[1623]: time="2025-12-13T00:26:42.995184996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-8pbbj,Uid:a727a70e-6988-4a99-89fe-438d343667ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231615025b8de0e2fd84346f5fd1b03fcf7588b3e47dede3507fe4fe50482cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.995598 kubelet[2802]: E1213 00:26:42.995552 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231615025b8de0e2fd84346f5fd1b03fcf7588b3e47dede3507fe4fe50482cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.995686 kubelet[2802]: E1213 00:26:42.995625 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231615025b8de0e2fd84346f5fd1b03fcf7588b3e47dede3507fe4fe50482cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" Dec 13 00:26:42.995686 kubelet[2802]: E1213 00:26:42.995654 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231615025b8de0e2fd84346f5fd1b03fcf7588b3e47dede3507fe4fe50482cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" Dec 13 00:26:42.995756 kubelet[2802]: E1213 00:26:42.995723 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7544896fd5-8pbbj_calico-apiserver(a727a70e-6988-4a99-89fe-438d343667ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7544896fd5-8pbbj_calico-apiserver(a727a70e-6988-4a99-89fe-438d343667ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6231615025b8de0e2fd84346f5fd1b03fcf7588b3e47dede3507fe4fe50482cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 
13 00:26:42.998394 containerd[1623]: time="2025-12-13T00:26:42.998337386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmfwt,Uid:2103450f-178f-4fe4-be81-451ab8c0d111,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d2f5979a4cb332b9df5d24635e35ef429021e9b929e5ddc6bca26b73746dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.998652 kubelet[2802]: E1213 00:26:42.998580 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d2f5979a4cb332b9df5d24635e35ef429021e9b929e5ddc6bca26b73746dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:42.998727 kubelet[2802]: E1213 00:26:42.998651 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d2f5979a4cb332b9df5d24635e35ef429021e9b929e5ddc6bca26b73746dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lmfwt" Dec 13 00:26:42.998727 kubelet[2802]: E1213 00:26:42.998675 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6d2f5979a4cb332b9df5d24635e35ef429021e9b929e5ddc6bca26b73746dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lmfwt" Dec 13 00:26:42.998804 kubelet[2802]: E1213 00:26:42.998733 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lmfwt_calico-system(2103450f-178f-4fe4-be81-451ab8c0d111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lmfwt_calico-system(2103450f-178f-4fe4-be81-451ab8c0d111)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6d2f5979a4cb332b9df5d24635e35ef429021e9b929e5ddc6bca26b73746dbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:26:43.524419 systemd[1]: Created slice kubepods-besteffort-pod0f1e7d1a_39ba_40d7_9b6b_6f10c141ab77.slice - libcontainer container kubepods-besteffort-pod0f1e7d1a_39ba_40d7_9b6b_6f10c141ab77.slice. 
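Every RunPodSandbox failure above fails the same way: the Calico CNI plugin stats /var/lib/calico/nodename, which only exists once the calico/node container is running and has mounted /var/lib/calico/. An illustrative Go sketch reproducing just that check (not the plugin's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// This is the condition behind the repeated sandbox setup errors above.
		fmt.Println("calico/node has not written", nodenameFile, "yet; CNI add/delete will fail")
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("node name recorded by calico/node:", string(data))
}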
Dec 13 00:26:43.526994 containerd[1623]: time="2025-12-13T00:26:43.526952921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxl5n,Uid:0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:43.747161 containerd[1623]: time="2025-12-13T00:26:43.747098577Z" level=error msg="Failed to destroy network for sandbox \"0084e1974711e297ea990d5b283a108637e56af264855197d623d19b1a28e451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:43.833914 containerd[1623]: time="2025-12-13T00:26:43.833841664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxl5n,Uid:0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0084e1974711e297ea990d5b283a108637e56af264855197d623d19b1a28e451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:43.834453 kubelet[2802]: E1213 00:26:43.834135 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0084e1974711e297ea990d5b283a108637e56af264855197d623d19b1a28e451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:43.834453 kubelet[2802]: E1213 00:26:43.834208 2802 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0084e1974711e297ea990d5b283a108637e56af264855197d623d19b1a28e451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:43.834453 kubelet[2802]: E1213 00:26:43.834234 2802 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0084e1974711e297ea990d5b283a108637e56af264855197d623d19b1a28e451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hxl5n" Dec 13 00:26:43.834858 kubelet[2802]: E1213 00:26:43.834293 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0084e1974711e297ea990d5b283a108637e56af264855197d623d19b1a28e451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:26:43.859290 systemd[1]: run-netns-cni\x2d7721ff5e\x2d3c5b\x2d5fb1\x2ddc0a\x2d8586769490e6.mount: Deactivated successfully. 
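The run-netns-cni\x2d... units above are systemd mount units for the per-sandbox network namespaces under /run/netns; systemd escapes "/" as "-" and "-" as "\x2d" in unit names. A rough, illustrative Go helper that undoes that escaping for these particular names:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit reverses systemd's \xNN escaping and maps the remaining
// "-" separators back to "/". This is enough for these netns mount unit
// names; it is not a full reimplementation of systemd-escape.
func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // e.g. \x2d -> '-'
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/') // unescaped '-' separates path components
			continue
		}
		b.WriteByte(name[i])
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(`run-netns-cni\x2dbd79c4f3\x2d31bc\x2d6c02\x2d88df\x2de88e7b9a6f37.mount`))
	// -> /run/netns/cni-bd79c4f3-31bc-6c02-88df-e88e7b9a6f37
}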
Dec 13 00:26:43.859429 systemd[1]: run-netns-cni\x2da84e7363\x2d5d25\x2d8b25\x2db96f\x2d90ad1fb7d1e2.mount: Deactivated successfully. Dec 13 00:26:43.859530 systemd[1]: run-netns-cni\x2da11ced32\x2df65b\x2d83fc\x2da7ce\x2db442381736b9.mount: Deactivated successfully. Dec 13 00:26:49.132809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601580587.mount: Deactivated successfully. Dec 13 00:26:51.649155 containerd[1623]: time="2025-12-13T00:26:51.649040443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:51.653621 containerd[1623]: time="2025-12-13T00:26:51.653557953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880766" Dec 13 00:26:51.661477 containerd[1623]: time="2025-12-13T00:26:51.661245941Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:51.685469 containerd[1623]: time="2025-12-13T00:26:51.685397703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:51.686174 containerd[1623]: time="2025-12-13T00:26:51.686116742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.806120517s" Dec 13 00:26:51.686174 containerd[1623]: time="2025-12-13T00:26:51.686170302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 13 00:26:51.812227 containerd[1623]: time="2025-12-13T00:26:51.812178077Z" level=info msg="CreateContainer within sandbox \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 00:26:51.887287 containerd[1623]: time="2025-12-13T00:26:51.884927926Z" level=info msg="Container 2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:51.912808 containerd[1623]: time="2025-12-13T00:26:51.912633662Z" level=info msg="CreateContainer within sandbox \"4863b1ffa383771e89afe72666864f7eb056b0ac1596c2def22fe092c7ebbc93\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6\"" Dec 13 00:26:51.914270 containerd[1623]: time="2025-12-13T00:26:51.914244092Z" level=info msg="StartContainer for \"2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6\"" Dec 13 00:26:51.916456 containerd[1623]: time="2025-12-13T00:26:51.916411938Z" level=info msg="connecting to shim 2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6" address="unix:///run/containerd/s/a526cea46d62670bd4466cc60c0623a303050088a368bb90e3741a93e85964a9" protocol=ttrpc version=3 Dec 13 00:26:51.943742 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 13 00:26:51.943884 kernel: audit: type=1130 audit(1765585611.934:578): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.117:22-10.0.0.1:33896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:51.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.117:22-10.0.0.1:33896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:51.936265 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:33896.service - OpenSSH per-connection server daemon (10.0.0.1:33896). Dec 13 00:26:51.967172 systemd[1]: Started cri-containerd-2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6.scope - libcontainer container 2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6. Dec 13 00:26:52.043000 audit: BPF prog-id=176 op=LOAD Dec 13 00:26:52.043000 audit[3820]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.054571 kernel: audit: type=1334 audit(1765585612.043:579): prog-id=176 op=LOAD Dec 13 00:26:52.054611 kernel: audit: type=1300 audit(1765585612.043:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.054749 kernel: audit: type=1327 audit(1765585612.043:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.043000 audit: BPF prog-id=177 op=LOAD Dec 13 00:26:52.064475 kernel: audit: type=1334 audit(1765585612.043:580): prog-id=177 op=LOAD Dec 13 00:26:52.043000 audit[3820]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.069556 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 33896 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:26:52.072413 kernel: audit: type=1300 audit(1765585612.043:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.072862 sshd-session[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:52.082419 kernel: audit: type=1327 audit(1765585612.043:580): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.082449 systemd-logind[1589]: New session 9 of user core. Dec 13 00:26:52.043000 audit: BPF prog-id=177 op=UNLOAD Dec 13 00:26:52.085516 kernel: audit: type=1334 audit(1765585612.043:581): prog-id=177 op=UNLOAD Dec 13 00:26:52.043000 audit[3820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.097639 kernel: audit: type=1300 audit(1765585612.043:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.097706 kernel: audit: type=1327 audit(1765585612.043:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.043000 audit: BPF prog-id=176 op=UNLOAD Dec 13 00:26:52.043000 audit[3820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.043000 audit: BPF prog-id=178 op=LOAD Dec 13 00:26:52.043000 audit[3820]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3337 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261353064323565383635363862643061666636663035646465626262 Dec 13 00:26:52.063000 audit[3826]: USER_ACCT pid=3826 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.068000 audit[3826]: CRED_ACQ pid=3826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.068000 audit[3826]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8c743eb0 a2=3 a3=0 items=0 ppid=1 pid=3826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.068000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:52.099276 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 00:26:52.105000 audit[3826]: USER_START pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.107596 containerd[1623]: time="2025-12-13T00:26:52.107012203Z" level=info msg="StartContainer for \"2a50d25e86568bd0aff6f05ddebbbdfaa859c6f02d00b4178041a4891cdf62d6\" returns successfully" Dec 13 00:26:52.108000 audit[3852]: CRED_ACQ pid=3852 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.209634 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 00:26:52.209871 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
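The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes separating the arguments; the long 72756E63… strings all decode to runc invocations of the form "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>/…", with the tail cut off by the audit record's length limit. A small decoder using only encoding/hex, fed a prefix copied verbatim from one of the records:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// Prefix copied verbatim from a PROCTITLE record above; the full field is
	// longer but truncated by auditd, so only this prefix is decoded here.
	const proctitleHex = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitleHex)
	if err != nil {
		log.Fatal(err)
	}
	// proctitle stores argv with NUL separators; swap them for spaces to read it.
	fmt.Println(string(bytes.ReplaceAll(raw, []byte{0}, []byte(" "))))
	// Output: runc --root /run/containerd/runc/k8s.io
}
```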
Dec 13 00:26:52.916064 kubelet[2802]: E1213 00:26:52.915687 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:52.965528 sshd[3852]: Connection closed by 10.0.0.1 port 33896 Dec 13 00:26:52.966724 sshd-session[3826]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:52.968000 audit[3826]: USER_END pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.968000 audit[3826]: CRED_DISP pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.973905 kubelet[2802]: I1213 00:26:52.973846 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ds7r8" podStartSLOduration=2.30157133 podStartE2EDuration="29.973817235s" podCreationTimestamp="2025-12-13 00:26:23 +0000 UTC" firstStartedPulling="2025-12-13 00:26:24.014725339 +0000 UTC m=+23.593003843" lastFinishedPulling="2025-12-13 00:26:51.686971244 +0000 UTC m=+51.265249748" observedRunningTime="2025-12-13 00:26:52.97304113 +0000 UTC m=+52.551319634" watchObservedRunningTime="2025-12-13 00:26:52.973817235 +0000 UTC m=+52.552095739" Dec 13 00:26:52.974008 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:33896.service: Deactivated successfully. Dec 13 00:26:52.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.117:22-10.0.0.1:33896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:52.980498 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 00:26:52.983096 systemd-logind[1589]: Session 9 logged out. Waiting for processes to exit. Dec 13 00:26:52.984776 systemd-logind[1589]: Removed session 9. 
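The pod_startup_latency_tracker line above reports two durations for calico-node-ds7r8, and both follow from the timestamps in the same line: podStartE2EDuration is observed-running minus pod creation (00:26:52.973817235 − 00:26:23 ≈ 29.97 s), and podStartSLOduration appears to be that figure with the image-pull window (firstStartedPulling to lastFinishedPulling, about 27.67 s) subtracted, leaving about 2.30 s. A quick sketch of that arithmetic:

```go
package main

import "fmt"

func main() {
	// Seconds past 00:26:00 UTC, copied from the kubelet line above.
	created := 23.0           // podCreationTimestamp
	pullStart := 24.014725339 // firstStartedPulling
	pullEnd := 51.686971244   // lastFinishedPulling
	running := 52.973817235   // watchObservedRunningTime

	e2e := running - created           // ~29.974s -> podStartE2EDuration
	slo := e2e - (pullEnd - pullStart) // ~2.302s  -> podStartSLOduration (pull time excluded)
	fmt.Printf("e2e=%.3fs slo=%.3fs\n", e2e, slo)
}
```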
Dec 13 00:26:53.158901 kubelet[2802]: I1213 00:26:53.158810 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-backend-key-pair\") pod \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\" (UID: \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\") " Dec 13 00:26:53.158901 kubelet[2802]: I1213 00:26:53.158898 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-ca-bundle\") pod \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\" (UID: \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\") " Dec 13 00:26:53.158901 kubelet[2802]: I1213 00:26:53.158914 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6lbz\" (UniqueName: \"kubernetes.io/projected/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-kube-api-access-k6lbz\") pod \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\" (UID: \"d0ccec5a-3eb2-4766-8606-0b092ddbbee0\") " Dec 13 00:26:53.159413 kubelet[2802]: I1213 00:26:53.159361 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d0ccec5a-3eb2-4766-8606-0b092ddbbee0" (UID: "d0ccec5a-3eb2-4766-8606-0b092ddbbee0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 13 00:26:53.164428 systemd[1]: var-lib-kubelet-pods-d0ccec5a\x2d3eb2\x2d4766\x2d8606\x2d0b092ddbbee0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6lbz.mount: Deactivated successfully. Dec 13 00:26:53.165067 kubelet[2802]: I1213 00:26:53.165016 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d0ccec5a-3eb2-4766-8606-0b092ddbbee0" (UID: "d0ccec5a-3eb2-4766-8606-0b092ddbbee0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 13 00:26:53.166643 kubelet[2802]: I1213 00:26:53.166086 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-kube-api-access-k6lbz" (OuterVolumeSpecName: "kube-api-access-k6lbz") pod "d0ccec5a-3eb2-4766-8606-0b092ddbbee0" (UID: "d0ccec5a-3eb2-4766-8606-0b092ddbbee0"). InnerVolumeSpecName "kube-api-access-k6lbz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 13 00:26:53.167367 systemd[1]: var-lib-kubelet-pods-d0ccec5a\x2d3eb2\x2d4766\x2d8606\x2d0b092ddbbee0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 13 00:26:53.259446 kubelet[2802]: I1213 00:26:53.259372 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 13 00:26:53.259446 kubelet[2802]: I1213 00:26:53.259429 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 00:26:53.259446 kubelet[2802]: I1213 00:26:53.259443 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6lbz\" (UniqueName: \"kubernetes.io/projected/d0ccec5a-3eb2-4766-8606-0b092ddbbee0-kube-api-access-k6lbz\") on node \"localhost\" DevicePath \"\"" Dec 13 00:26:53.518596 kubelet[2802]: E1213 00:26:53.518259 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.519747 containerd[1623]: time="2025-12-13T00:26:53.519691553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xd67,Uid:8b50cafb-741f-496b-9d47-9978f4044509,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:53.520171 containerd[1623]: time="2025-12-13T00:26:53.520092635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-wwf2v,Uid:baf0c6ad-ada1-4a24-b663-b32f96db48d0,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:53.726072 systemd-networkd[1314]: cali70cfc839f77: Link UP Dec 13 00:26:53.726918 systemd-networkd[1314]: cali70cfc839f77: Gained carrier Dec 13 00:26:53.740538 containerd[1623]: 2025-12-13 00:26:53.589 [INFO][3943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:53.740538 containerd[1623]: 2025-12-13 00:26:53.608 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2xd67-eth0 coredns-674b8bbfcf- kube-system 8b50cafb-741f-496b-9d47-9978f4044509 923 0 2025-12-13 00:26:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2xd67 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70cfc839f77 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-" Dec 13 00:26:53.740538 containerd[1623]: 2025-12-13 00:26:53.608 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.740538 containerd[1623]: 2025-12-13 00:26:53.679 [INFO][3956] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" HandleID="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Workload="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.680 
[INFO][3956] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" HandleID="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Workload="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2xd67", "timestamp":"2025-12-13 00:26:53.679964426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.680 [INFO][3956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.680 [INFO][3956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.681 [INFO][3956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.688 [INFO][3956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" host="localhost" Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.694 [INFO][3956] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.699 [INFO][3956] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.701 [INFO][3956] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.703 [INFO][3956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:53.740869 containerd[1623]: 2025-12-13 00:26:53.703 [INFO][3956] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" host="localhost" Dec 13 00:26:53.741166 containerd[1623]: 2025-12-13 00:26:53.704 [INFO][3956] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98 Dec 13 00:26:53.741166 containerd[1623]: 2025-12-13 00:26:53.708 [INFO][3956] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" host="localhost" Dec 13 00:26:53.741166 containerd[1623]: 2025-12-13 00:26:53.713 [INFO][3956] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" host="localhost" Dec 13 00:26:53.741166 containerd[1623]: 2025-12-13 00:26:53.713 [INFO][3956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" host="localhost" Dec 13 00:26:53.741166 containerd[1623]: 2025-12-13 00:26:53.714 [INFO][3956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
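The IPAM trace above shows the plugin taking the host-wide lock, confirming this node's affinity for the block 192.168.88.128/26, and claiming 192.168.88.129 for coredns-674b8bbfcf-2xd67; the parallel request for the calico-apiserver pod (below) appears to wait on the same lock and is handed 192.168.88.130 from the same block. A /26 gives the node 64 addresses to allocate from. A small containment check using only the standard net package, with the block and addresses taken from the log:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and addresses as logged by ipam/ipam.go above and below.
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	for _, ip := range []string{"192.168.88.129", "192.168.88.130"} {
		fmt.Printf("%s in block: %v\n", ip, block.Contains(net.ParseIP(ip)))
	}
}
```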
Dec 13 00:26:53.741166 containerd[1623]: 2025-12-13 00:26:53.714 [INFO][3956] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" HandleID="k8s-pod-network.e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Workload="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.741339 containerd[1623]: 2025-12-13 00:26:53.717 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2xd67-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8b50cafb-741f-496b-9d47-9978f4044509", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2xd67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70cfc839f77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:53.741480 containerd[1623]: 2025-12-13 00:26:53.717 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.741480 containerd[1623]: 2025-12-13 00:26:53.717 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70cfc839f77 ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.741480 containerd[1623]: 2025-12-13 00:26:53.726 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.741588 
containerd[1623]: 2025-12-13 00:26:53.727 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2xd67-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8b50cafb-741f-496b-9d47-9978f4044509", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98", Pod:"coredns-674b8bbfcf-2xd67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70cfc839f77", MAC:"36:e3:de:f3:c1:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:53.741588 containerd[1623]: 2025-12-13 00:26:53.735 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xd67" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xd67-eth0" Dec 13 00:26:53.834850 containerd[1623]: time="2025-12-13T00:26:53.834794993Z" level=info msg="connecting to shim e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98" address="unix:///run/containerd/s/e3d4487f58b6b6dc28d71b528e494bc4b4dbf9d01cbb831f7cc8109632b0b13e" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:53.835902 systemd-networkd[1314]: caliaf844692d8a: Link UP Dec 13 00:26:53.836631 systemd-networkd[1314]: caliaf844692d8a: Gained carrier Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.587 [INFO][3926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.608 [INFO][3926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0 calico-apiserver-7544896fd5- calico-apiserver baf0c6ad-ada1-4a24-b663-b32f96db48d0 925 0 2025-12-13 00:26:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
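In the WorkloadEndpoint dump above, the port numbers are printed as Go hex literals: 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the coredns metrics port), and the finished endpoint now carries the generated MAC 36:e3:de:f3:c1:06 for the host-side veth cali70cfc839f77. Nothing beyond base conversion is involved:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpointPort entries above.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p) // dns/dns-tcp -> 53, metrics -> 9153
	}
}
```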
k8s-app:calico-apiserver pod-template-hash:7544896fd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7544896fd5-wwf2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaf844692d8a [] [] }} ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.608 [INFO][3926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.679 [INFO][3957] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" HandleID="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Workload="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.680 [INFO][3957] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" HandleID="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Workload="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050ad00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7544896fd5-wwf2v", "timestamp":"2025-12-13 00:26:53.679585746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.680 [INFO][3957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.714 [INFO][3957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.714 [INFO][3957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.792 [INFO][3957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.796 [INFO][3957] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.801 [INFO][3957] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.804 [INFO][3957] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.806 [INFO][3957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.806 [INFO][3957] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.808 [INFO][3957] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091 Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.814 [INFO][3957] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.823 [INFO][3957] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.823 [INFO][3957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" host="localhost" Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.823 [INFO][3957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:53.851268 containerd[1623]: 2025-12-13 00:26:53.823 [INFO][3957] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" HandleID="k8s-pod-network.3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Workload="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.851868 containerd[1623]: 2025-12-13 00:26:53.828 [INFO][3926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0", GenerateName:"calico-apiserver-7544896fd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"baf0c6ad-ada1-4a24-b663-b32f96db48d0", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7544896fd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7544896fd5-wwf2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf844692d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:53.851868 containerd[1623]: 2025-12-13 00:26:53.828 [INFO][3926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.851868 containerd[1623]: 2025-12-13 00:26:53.828 [INFO][3926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf844692d8a ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.851868 containerd[1623]: 2025-12-13 00:26:53.836 [INFO][3926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.851868 containerd[1623]: 2025-12-13 00:26:53.837 [INFO][3926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0", GenerateName:"calico-apiserver-7544896fd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"baf0c6ad-ada1-4a24-b663-b32f96db48d0", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7544896fd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091", Pod:"calico-apiserver-7544896fd5-wwf2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf844692d8a", MAC:"8e:10:d5:9b:7b:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:53.851868 containerd[1623]: 2025-12-13 00:26:53.848 [INFO][3926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-wwf2v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--wwf2v-eth0" Dec 13 00:26:53.874728 systemd[1]: Started cri-containerd-e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98.scope - libcontainer container e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98. 
Dec 13 00:26:53.876679 containerd[1623]: time="2025-12-13T00:26:53.876632301Z" level=info msg="connecting to shim 3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091" address="unix:///run/containerd/s/879ac65dda04dab54acb7378b364b8e8f0df425ba1fd8f8eaf0c35dc07dae56b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:53.887000 audit: BPF prog-id=179 op=LOAD Dec 13 00:26:53.888000 audit: BPF prog-id=180 op=LOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.888000 audit: BPF prog-id=180 op=UNLOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.888000 audit: BPF prog-id=181 op=LOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.888000 audit: BPF prog-id=182 op=LOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.888000 audit: BPF prog-id=182 op=UNLOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.888000 audit: BPF prog-id=181 op=UNLOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.888000 audit: BPF prog-id=183 op=LOAD Dec 13 00:26:53.888000 audit[3999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3987 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534666264613363646362336438646335386635366232643233366232 Dec 13 00:26:53.890953 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:53.903748 systemd[1]: Started cri-containerd-3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091.scope - libcontainer container 3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091. 
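The bursts of audit records around each container start (SYSCALL with arch=c000003e, syscall=321, comm="runc", interleaved with "BPF prog-id=N op=LOAD/UNLOAD") are runc issuing bpf(2) calls while it prepares the container: on x86-64, syscall number 321 is bpf and c000003e is AUDIT_ARCH_X86_64, so each LOAD/UNLOAD pair is a short-lived BPF program created and released during setup, most likely runc's feature probes and cgroup device-filter handling. A small field parser for these key=value records, as a sketch (the sample line abbreviates one of the records above and assumes no spaces inside quoted values):

```go
package main

import (
	"fmt"
	"strings"
)

// parseAudit splits an audit record body into its key=value fields.
// Naive split on whitespace: quoted values keep their quotes and must not
// contain spaces, which holds for the runc records shown above.
func parseAudit(line string) map[string]string {
	fields := map[string]string{}
	for _, tok := range strings.Fields(line) {
		if k, v, ok := strings.Cut(tok, "="); ok {
			fields[k] = v
		}
	}
	return fields
}

func main() {
	// Abbreviated from an audit SYSCALL record above; later fields omitted.
	line := `arch=c000003e syscall=321 success=yes exit=20 ppid=3337 pid=3820 comm="runc" exe="/usr/bin/runc"`
	f := parseAudit(line)
	fmt.Printf("arch=%s syscall=%s comm=%s\n", f["arch"], f["syscall"], f["comm"])
}
```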
Dec 13 00:26:53.916629 kubelet[2802]: E1213 00:26:53.916196 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.920000 audit: BPF prog-id=184 op=LOAD Dec 13 00:26:53.921000 audit: BPF prog-id=185 op=LOAD Dec 13 00:26:53.921000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.921000 audit: BPF prog-id=185 op=UNLOAD Dec 13 00:26:53.921000 audit[4042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.921000 audit: BPF prog-id=186 op=LOAD Dec 13 00:26:53.921000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.921000 audit: BPF prog-id=187 op=LOAD Dec 13 00:26:53.921000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.921000 audit: BPF prog-id=187 op=UNLOAD Dec 13 00:26:53.921000 audit[4042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.921000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.922000 audit: BPF prog-id=186 op=UNLOAD Dec 13 00:26:53.922000 audit[4042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.922000 audit: BPF prog-id=188 op=LOAD Dec 13 00:26:53.922000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4024 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363361386431643163343661363937363563376461313535353266 Dec 13 00:26:53.925364 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:53.929174 systemd[1]: Removed slice kubepods-besteffort-podd0ccec5a_3eb2_4766_8606_0b092ddbbee0.slice - libcontainer container kubepods-besteffort-podd0ccec5a_3eb2_4766_8606_0b092ddbbee0.slice. 
Dec 13 00:26:53.966665 containerd[1623]: time="2025-12-13T00:26:53.966606010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xd67,Uid:8b50cafb-741f-496b-9d47-9978f4044509,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98\"" Dec 13 00:26:53.968645 kubelet[2802]: E1213 00:26:53.968589 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.980619 containerd[1623]: time="2025-12-13T00:26:53.980564046Z" level=info msg="CreateContainer within sandbox \"e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:26:53.982885 containerd[1623]: time="2025-12-13T00:26:53.982820898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-wwf2v,Uid:baf0c6ad-ada1-4a24-b663-b32f96db48d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3e63a8d1d1c46a69765c7da15552faf1ed76cda703d934554a3c790753b34091\"" Dec 13 00:26:53.987256 containerd[1623]: time="2025-12-13T00:26:53.987226420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:26:54.000770 containerd[1623]: time="2025-12-13T00:26:54.000709795Z" level=info msg="Container 1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:54.011544 containerd[1623]: time="2025-12-13T00:26:54.011479663Z" level=info msg="CreateContainer within sandbox \"e4fbda3cdcb3d8dc58f56b2d236b2728ef7c29745e16155d8822930dba9ffc98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14\"" Dec 13 00:26:54.013695 containerd[1623]: time="2025-12-13T00:26:54.013658328Z" level=info msg="StartContainer for \"1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14\"" Dec 13 00:26:54.015409 containerd[1623]: time="2025-12-13T00:26:54.014884117Z" level=info msg="connecting to shim 1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14" address="unix:///run/containerd/s/e3d4487f58b6b6dc28d71b528e494bc4b4dbf9d01cbb831f7cc8109632b0b13e" protocol=ttrpc version=3 Dec 13 00:26:54.025788 systemd[1]: Created slice kubepods-besteffort-pod331b009d_3266_4003_85e6_8aaabc469f22.slice - libcontainer container kubepods-besteffort-pod331b009d_3266_4003_85e6_8aaabc469f22.slice. Dec 13 00:26:54.042784 systemd[1]: Started cri-containerd-1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14.scope - libcontainer container 1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14. 
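The recurring dns.go:153 "Nameserver limits exceeded" warnings (here and at 00:26:52.916 and 00:26:53.916 above) mean the host resolv.conf lists more nameservers than the kubelet will propagate into pods; it caps the list at three, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8" and anything beyond that is dropped. A sketch of that truncation, with the fourth entry purely hypothetical since the omitted server never appears in the log:

```go
package main

import "fmt"

// maxNameservers mirrors the three-entry cap the kubelet warning refers to.
const maxNameservers = 3

func main() {
	// First three entries come from the warning's applied nameserver line;
	// "10.0.0.53" is a hypothetical stand-in for whatever was omitted.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.53"}
	if len(nameservers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s)\n", len(nameservers)-maxNameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", nameservers)
}
```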
Dec 13 00:26:54.057000 audit: BPF prog-id=189 op=LOAD Dec 13 00:26:54.058000 audit: BPF prog-id=190 op=LOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.058000 audit: BPF prog-id=190 op=UNLOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.058000 audit: BPF prog-id=191 op=LOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.058000 audit: BPF prog-id=192 op=LOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.058000 audit: BPF prog-id=192 op=UNLOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.058000 audit: BPF prog-id=191 op=UNLOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.058000 audit: BPF prog-id=193 op=LOAD Dec 13 00:26:54.058000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3987 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162316663356531303239363136623439633661386139663163383032 Dec 13 00:26:54.066444 kubelet[2802]: I1213 00:26:54.066345 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/331b009d-3266-4003-85e6-8aaabc469f22-whisker-backend-key-pair\") pod \"whisker-754d866664-bf6vf\" (UID: \"331b009d-3266-4003-85e6-8aaabc469f22\") " pod="calico-system/whisker-754d866664-bf6vf" Dec 13 00:26:54.066776 kubelet[2802]: I1213 00:26:54.066662 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/331b009d-3266-4003-85e6-8aaabc469f22-whisker-ca-bundle\") pod \"whisker-754d866664-bf6vf\" (UID: \"331b009d-3266-4003-85e6-8aaabc469f22\") " pod="calico-system/whisker-754d866664-bf6vf" Dec 13 00:26:54.067065 kubelet[2802]: I1213 00:26:54.067001 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkdd\" (UniqueName: \"kubernetes.io/projected/331b009d-3266-4003-85e6-8aaabc469f22-kube-api-access-xdkdd\") pod \"whisker-754d866664-bf6vf\" (UID: \"331b009d-3266-4003-85e6-8aaabc469f22\") " pod="calico-system/whisker-754d866664-bf6vf" Dec 13 00:26:54.080642 containerd[1623]: time="2025-12-13T00:26:54.080599734Z" level=info msg="StartContainer for \"1b1fc5e1029616b49c6a8a9f1c802b35b7939f4bd5259f5f276ebaddddcf7c14\" returns successfully" Dec 13 00:26:54.301300 containerd[1623]: time="2025-12-13T00:26:54.301222022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:54.308397 containerd[1623]: time="2025-12-13T00:26:54.308317418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:26:54.308573 containerd[1623]: time="2025-12-13T00:26:54.308408850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:54.308778 kubelet[2802]: E1213 00:26:54.308698 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:54.308955 kubelet[2802]: E1213 00:26:54.308805 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:54.313760 kubelet[2802]: E1213 00:26:54.313637 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2szwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7544896fd5-wwf2v_calico-apiserver(baf0c6ad-ada1-4a24-b663-b32f96db48d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:54.314932 kubelet[2802]: E1213 00:26:54.314884 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:26:54.336169 
containerd[1623]: time="2025-12-13T00:26:54.336112353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754d866664-bf6vf,Uid:331b009d-3266-4003-85e6-8aaabc469f22,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:54.481756 systemd-networkd[1314]: cali26ff4856297: Link UP Dec 13 00:26:54.482539 systemd-networkd[1314]: cali26ff4856297: Gained carrier Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.364 [INFO][4137] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.374 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--754d866664--bf6vf-eth0 whisker-754d866664- calico-system 331b009d-3266-4003-85e6-8aaabc469f22 1049 0 2025-12-13 00:26:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:754d866664 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-754d866664-bf6vf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26ff4856297 [] [] }} ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.374 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.405 [INFO][4152] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" HandleID="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Workload="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.405 [INFO][4152] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" HandleID="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Workload="localhost-k8s-whisker--754d866664--bf6vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-754d866664-bf6vf", "timestamp":"2025-12-13 00:26:54.405317965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.405 [INFO][4152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.405 [INFO][4152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.405 [INFO][4152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.430 [INFO][4152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.436 [INFO][4152] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.441 [INFO][4152] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.443 [INFO][4152] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.446 [INFO][4152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.446 [INFO][4152] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.447 [INFO][4152] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.454 [INFO][4152] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.472 [INFO][4152] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.472 [INFO][4152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" host="localhost" Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.472 [INFO][4152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:54.492878 containerd[1623]: 2025-12-13 00:26:54.472 [INFO][4152] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" HandleID="k8s-pod-network.e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Workload="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.493724 containerd[1623]: 2025-12-13 00:26:54.476 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--754d866664--bf6vf-eth0", GenerateName:"whisker-754d866664-", Namespace:"calico-system", SelfLink:"", UID:"331b009d-3266-4003-85e6-8aaabc469f22", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"754d866664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-754d866664-bf6vf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26ff4856297", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:54.493724 containerd[1623]: 2025-12-13 00:26:54.476 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.493724 containerd[1623]: 2025-12-13 00:26:54.476 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26ff4856297 ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.493724 containerd[1623]: 2025-12-13 00:26:54.478 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.493724 containerd[1623]: 2025-12-13 00:26:54.479 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--754d866664--bf6vf-eth0", GenerateName:"whisker-754d866664-", Namespace:"calico-system", SelfLink:"", UID:"331b009d-3266-4003-85e6-8aaabc469f22", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"754d866664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba", Pod:"whisker-754d866664-bf6vf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26ff4856297", MAC:"86:42:47:c6:38:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:54.493724 containerd[1623]: 2025-12-13 00:26:54.488 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" Namespace="calico-system" Pod="whisker-754d866664-bf6vf" WorkloadEndpoint="localhost-k8s-whisker--754d866664--bf6vf-eth0" Dec 13 00:26:54.518623 kubelet[2802]: E1213 00:26:54.518547 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:54.519045 containerd[1623]: time="2025-12-13T00:26:54.518697698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679b4bb9cf-2wh8x,Uid:1cc720b4-f00a-45c7-ad27-5d1040c88fe5,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:54.519188 containerd[1623]: time="2025-12-13T00:26:54.519132654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v228q,Uid:2a989c1a-b7fa-40df-8d15-8ca04e746459,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:54.521424 kubelet[2802]: I1213 00:26:54.521341 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0ccec5a-3eb2-4766-8606-0b092ddbbee0" path="/var/lib/kubelet/pods/d0ccec5a-3eb2-4766-8606-0b092ddbbee0/volumes" Dec 13 00:26:54.526008 containerd[1623]: time="2025-12-13T00:26:54.525954197Z" level=info msg="connecting to shim e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba" address="unix:///run/containerd/s/401538b38d040c6f16d0dc76a6cf297a5e97531fe3016fa215d80db6e1b842b5" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:54.569975 systemd[1]: Started cri-containerd-e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba.scope - libcontainer container e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba. 
Dec 13 00:26:54.619000 audit: BPF prog-id=194 op=LOAD Dec 13 00:26:54.620000 audit: BPF prog-id=195 op=LOAD Dec 13 00:26:54.620000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.620000 audit: BPF prog-id=195 op=UNLOAD Dec 13 00:26:54.620000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.621000 audit: BPF prog-id=196 op=LOAD Dec 13 00:26:54.621000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.622000 audit: BPF prog-id=197 op=LOAD Dec 13 00:26:54.622000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.622000 audit: BPF prog-id=197 op=UNLOAD Dec 13 00:26:54.622000 audit[4199]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.622000 audit: BPF prog-id=196 op=UNLOAD Dec 13 00:26:54.622000 audit[4199]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.622000 audit: BPF prog-id=198 op=LOAD Dec 13 00:26:54.622000 audit[4199]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4176 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535616131653830313764343836656362316434333334353939623764 Dec 13 00:26:54.626542 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:54.708857 containerd[1623]: time="2025-12-13T00:26:54.708796795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754d866664-bf6vf,Uid:331b009d-3266-4003-85e6-8aaabc469f22,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5aa1e8017d486ecb1d4334599b7db660677e1501c9b989038e792dda5e429ba\"" Dec 13 00:26:54.718684 containerd[1623]: time="2025-12-13T00:26:54.718639283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:26:54.748616 systemd-networkd[1314]: cali51e6e6f633c: Link UP Dec 13 00:26:54.749986 systemd-networkd[1314]: cali51e6e6f633c: Gained carrier Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.580 [INFO][4192] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.600 [INFO][4192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--v228q-eth0 coredns-674b8bbfcf- kube-system 2a989c1a-b7fa-40df-8d15-8ca04e746459 927 0 2025-12-13 00:26:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-v228q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51e6e6f633c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.601 [INFO][4192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.656 [INFO][4285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" HandleID="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Workload="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.658 [INFO][4285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" HandleID="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Workload="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a6fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-v228q", "timestamp":"2025-12-13 00:26:54.656234614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.658 [INFO][4285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.658 [INFO][4285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.658 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.666 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.681 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.694 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.698 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.705 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.706 [INFO][4285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.712 [INFO][4285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8 Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.725 [INFO][4285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.734 [INFO][4285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.734 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" host="localhost" Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.734 [INFO][4285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:26:54.774346 containerd[1623]: 2025-12-13 00:26:54.734 [INFO][4285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" HandleID="k8s-pod-network.3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Workload="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.775287 containerd[1623]: 2025-12-13 00:26:54.742 [INFO][4192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v228q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a989c1a-b7fa-40df-8d15-8ca04e746459", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-v228q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51e6e6f633c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:54.775287 containerd[1623]: 2025-12-13 00:26:54.743 [INFO][4192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.775287 containerd[1623]: 2025-12-13 00:26:54.744 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51e6e6f633c ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.775287 containerd[1623]: 2025-12-13 00:26:54.750 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.775287 containerd[1623]: 2025-12-13 00:26:54.752 [INFO][4192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v228q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a989c1a-b7fa-40df-8d15-8ca04e746459", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8", Pod:"coredns-674b8bbfcf-v228q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51e6e6f633c", MAC:"4a:12:bf:a5:16:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:54.775287 containerd[1623]: 2025-12-13 00:26:54.765 [INFO][4192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" Namespace="kube-system" Pod="coredns-674b8bbfcf-v228q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v228q-eth0" Dec 13 00:26:54.822698 systemd-networkd[1314]: calib1e219bc405: Link UP Dec 13 00:26:54.824486 systemd-networkd[1314]: calib1e219bc405: Gained carrier Dec 13 00:26:54.838546 containerd[1623]: time="2025-12-13T00:26:54.837860832Z" level=info msg="connecting to shim 3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8" address="unix:///run/containerd/s/a72e9986723bc17af841e6524eeddff77c208e4f73e38aee5e5ef85b5d403011" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.590 [INFO][4187] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.610 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0 calico-kube-controllers-679b4bb9cf- calico-system 1cc720b4-f00a-45c7-ad27-5d1040c88fe5 924 0 2025-12-13 00:26:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:679b4bb9cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-679b4bb9cf-2wh8x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib1e219bc405 [] [] }} ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.610 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.690 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" HandleID="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Workload="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.692 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" HandleID="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Workload="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-679b4bb9cf-2wh8x", "timestamp":"2025-12-13 00:26:54.690014212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.693 [INFO][4315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.734 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.735 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.772 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.784 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.791 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.794 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.797 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.797 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.799 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78 Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.804 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.812 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.812 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" host="localhost" Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.813 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:54.865821 containerd[1623]: 2025-12-13 00:26:54.813 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" HandleID="k8s-pod-network.53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Workload="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.866603 containerd[1623]: 2025-12-13 00:26:54.818 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0", GenerateName:"calico-kube-controllers-679b4bb9cf-", Namespace:"calico-system", SelfLink:"", UID:"1cc720b4-f00a-45c7-ad27-5d1040c88fe5", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"679b4bb9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-679b4bb9cf-2wh8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib1e219bc405", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:54.866603 containerd[1623]: 2025-12-13 00:26:54.818 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.866603 containerd[1623]: 2025-12-13 00:26:54.818 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1e219bc405 ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.866603 containerd[1623]: 2025-12-13 00:26:54.825 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.866603 containerd[1623]: 2025-12-13 00:26:54.827 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0", GenerateName:"calico-kube-controllers-679b4bb9cf-", Namespace:"calico-system", SelfLink:"", UID:"1cc720b4-f00a-45c7-ad27-5d1040c88fe5", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"679b4bb9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78", Pod:"calico-kube-controllers-679b4bb9cf-2wh8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib1e219bc405", MAC:"f6:b2:4b:4c:5f:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:54.866603 containerd[1623]: 2025-12-13 00:26:54.840 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" Namespace="calico-system" Pod="calico-kube-controllers-679b4bb9cf-2wh8x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--679b4bb9cf--2wh8x-eth0" Dec 13 00:26:54.905713 systemd[1]: Started cri-containerd-3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8.scope - libcontainer container 3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8. 
Dec 13 00:26:54.929486 containerd[1623]: time="2025-12-13T00:26:54.929367245Z" level=info msg="connecting to shim 53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78" address="unix:///run/containerd/s/3d919a19c162b82390ae7f857905b60ff8bb532c8552d29187b17a4b671579f8" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:54.930244 kubelet[2802]: E1213 00:26:54.929891 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:26:54.934000 audit: BPF prog-id=199 op=LOAD Dec 13 00:26:54.936000 audit: BPF prog-id=200 op=LOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.936000 audit: BPF prog-id=200 op=UNLOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.936000 audit: BPF prog-id=201 op=LOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.936000 audit: BPF prog-id=202 op=LOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.936000 audit: BPF prog-id=202 op=UNLOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.936000 audit: BPF prog-id=201 op=UNLOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.936000 audit: BPF prog-id=203 op=LOAD Dec 13 00:26:54.936000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4370 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364646435633164653733613864323233633133386537353633303465 Dec 13 00:26:54.939585 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:54.944540 kubelet[2802]: E1213 00:26:54.944463 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:54.970671 kubelet[2802]: I1213 00:26:54.970585 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2xd67" podStartSLOduration=47.970535019 podStartE2EDuration="47.970535019s" podCreationTimestamp="2025-12-13 00:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:54.968288716 +0000 UTC m=+54.546567230" watchObservedRunningTime="2025-12-13 00:26:54.970535019 +0000 UTC m=+54.548813523" Dec 13 00:26:54.990000 audit[4449]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:54.990000 audit[4449]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd85e2e3d0 a2=0 a3=7ffd85e2e3bc 
items=0 ppid=2915 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:54.995652 systemd[1]: Started cri-containerd-53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78.scope - libcontainer container 53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78. Dec 13 00:26:54.998000 audit[4449]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:54.998000 audit[4449]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd85e2e3d0 a2=0 a3=0 items=0 ppid=2915 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:55.004047 containerd[1623]: time="2025-12-13T00:26:55.003996340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v228q,Uid:2a989c1a-b7fa-40df-8d15-8ca04e746459,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8\"" Dec 13 00:26:55.005404 kubelet[2802]: E1213 00:26:55.005085 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:55.013233 containerd[1623]: time="2025-12-13T00:26:55.013174822Z" level=info msg="CreateContainer within sandbox \"3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:26:55.014000 audit: BPF prog-id=204 op=LOAD Dec 13 00:26:55.015000 audit: BPF prog-id=205 op=LOAD Dec 13 00:26:55.015000 audit[4443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.015000 audit: BPF prog-id=205 op=UNLOAD Dec 13 00:26:55.015000 audit[4443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.015000 audit: BPF 
prog-id=206 op=LOAD Dec 13 00:26:55.015000 audit[4443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.015000 audit: BPF prog-id=207 op=LOAD Dec 13 00:26:55.015000 audit[4443]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.016000 audit: BPF prog-id=207 op=UNLOAD Dec 13 00:26:55.016000 audit[4443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.016000 audit: BPF prog-id=206 op=UNLOAD Dec 13 00:26:55.016000 audit[4443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.016000 audit: BPF prog-id=208 op=LOAD Dec 13 00:26:55.016000 audit[4443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4432 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633064383461393836333161613861633966316363626562646131 Dec 13 00:26:55.021457 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:55.020000 audit[4474]: 
NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:55.020000 audit[4474]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd8b392390 a2=0 a3=7ffd8b39237c items=0 ppid=2915 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:55.026000 audit[4474]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4474 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:55.026000 audit[4474]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd8b392390 a2=0 a3=0 items=0 ppid=2915 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:55.035614 containerd[1623]: time="2025-12-13T00:26:55.035564029Z" level=info msg="Container 075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:55.044973 containerd[1623]: time="2025-12-13T00:26:55.044915385Z" level=info msg="CreateContainer within sandbox \"3ddd5c1de73a8d223c138e756304ef52825d961e68336e81e8e43f2b3f1e0ff8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba\"" Dec 13 00:26:55.047762 containerd[1623]: time="2025-12-13T00:26:55.046586700Z" level=info msg="StartContainer for \"075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba\"" Dec 13 00:26:55.051107 containerd[1623]: time="2025-12-13T00:26:55.051028760Z" level=info msg="connecting to shim 075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba" address="unix:///run/containerd/s/a72e9986723bc17af841e6524eeddff77c208e4f73e38aee5e5ef85b5d403011" protocol=ttrpc version=3 Dec 13 00:26:55.069762 containerd[1623]: time="2025-12-13T00:26:55.069718881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:55.073910 containerd[1623]: time="2025-12-13T00:26:55.073829599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:26:55.074092 containerd[1623]: time="2025-12-13T00:26:55.073994178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:55.076884 kubelet[2802]: E1213 00:26:55.076843 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:26:55.077639 kubelet[2802]: E1213 00:26:55.077063 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:26:55.077639 kubelet[2802]: E1213 00:26:55.077587 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9c52d5ff733e47bb827dcc82f1f2eefb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdkdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-754d866664-bf6vf_calico-system(331b009d-3266-4003-85e6-8aaabc469f22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:55.088538 containerd[1623]: time="2025-12-13T00:26:55.088491926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679b4bb9cf-2wh8x,Uid:1cc720b4-f00a-45c7-ad27-5d1040c88fe5,Namespace:calico-system,Attempt:0,} returns sandbox id \"53c0d84a98631aa8ac9f1ccbebda1e3bba9c94b15d18fd79f8632907a40a5b78\"" Dec 13 00:26:55.089097 containerd[1623]: time="2025-12-13T00:26:55.088954043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:26:55.092000 audit: BPF prog-id=209 op=LOAD Dec 13 00:26:55.092000 audit[4503]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec534cad0 a2=98 a3=1fffffffffffffff items=0 ppid=4237 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.092000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:55.092000 audit: BPF prog-id=209 op=UNLOAD Dec 13 00:26:55.092000 audit[4503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffec534caa0 a3=0 items=0 ppid=4237 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.092000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:55.093000 audit: BPF prog-id=210 op=LOAD Dec 13 00:26:55.093000 audit[4503]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec534c9b0 a2=94 a3=3 items=0 ppid=4237 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.093000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:55.093000 audit: BPF prog-id=210 op=UNLOAD Dec 13 00:26:55.093000 audit[4503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec534c9b0 a2=94 a3=3 items=0 ppid=4237 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.093000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:55.093000 audit: BPF prog-id=211 op=LOAD Dec 13 00:26:55.093000 audit[4503]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec534c9f0 a2=94 a3=7ffec534cbd0 items=0 ppid=4237 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.093000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:55.093000 audit: BPF prog-id=211 op=UNLOAD Dec 13 00:26:55.093000 audit[4503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec534c9f0 a2=94 a3=7ffec534cbd0 items=0 ppid=4237 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.093000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:55.096000 audit: BPF prog-id=212 op=LOAD Dec 13 00:26:55.096000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf5bf14e0 a2=98 a3=3 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.097000 audit: BPF prog-id=212 op=UNLOAD Dec 13 00:26:55.097000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcf5bf14b0 a3=0 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.097000 audit: BPF prog-id=213 op=LOAD Dec 13 00:26:55.097000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf5bf12d0 a2=94 a3=54428f items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.097000 audit: BPF prog-id=213 op=UNLOAD Dec 13 00:26:55.097000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf5bf12d0 a2=94 a3=54428f items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.097000 audit: BPF prog-id=214 op=LOAD Dec 13 00:26:55.097000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf5bf1300 a2=94 a3=2 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.097000 audit: BPF prog-id=214 op=UNLOAD Dec 13 00:26:55.097000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf5bf1300 a2=0 a3=2 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.102580 systemd[1]: Started cri-containerd-075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba.scope - libcontainer container 075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba. 
Dec 13 00:26:55.121000 audit: BPF prog-id=215 op=LOAD Dec 13 00:26:55.122000 audit: BPF prog-id=216 op=LOAD Dec 13 00:26:55.122000 audit[4478]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.122000 audit: BPF prog-id=216 op=UNLOAD Dec 13 00:26:55.122000 audit[4478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.123000 audit: BPF prog-id=217 op=LOAD Dec 13 00:26:55.123000 audit[4478]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.123000 audit: BPF prog-id=218 op=LOAD Dec 13 00:26:55.123000 audit[4478]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.123000 audit: BPF prog-id=218 op=UNLOAD Dec 13 00:26:55.123000 audit[4478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.123000 audit: BPF prog-id=217 op=UNLOAD Dec 13 00:26:55.123000 audit[4478]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.124000 audit: BPF prog-id=219 op=LOAD Dec 13 00:26:55.124000 audit[4478]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4370 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037356439613333333664616631623461316233643735616563656134 Dec 13 00:26:55.258600 systemd-networkd[1314]: cali70cfc839f77: Gained IPv6LL Dec 13 00:26:55.271641 containerd[1623]: time="2025-12-13T00:26:55.271584855Z" level=info msg="StartContainer for \"075d9a3336daf1b4a1b3d75aecea40b38c377c98b335df9ca6ae33ca81e135ba\" returns successfully" Dec 13 00:26:55.333000 audit: BPF prog-id=220 op=LOAD Dec 13 00:26:55.333000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf5bf11c0 a2=94 a3=1 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.333000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.334000 audit: BPF prog-id=220 op=UNLOAD Dec 13 00:26:55.334000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcf5bf11c0 a2=94 a3=1 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.334000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.343000 audit: BPF prog-id=221 op=LOAD Dec 13 00:26:55.343000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf5bf11b0 a2=94 a3=4 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.343000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.343000 audit: BPF prog-id=221 op=UNLOAD Dec 13 00:26:55.343000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcf5bf11b0 a2=0 a3=4 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.343000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.343000 audit: BPF prog-id=222 op=LOAD Dec 13 00:26:55.343000 audit[4505]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf5bf1010 a2=94 a3=5 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.343000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.343000 audit: BPF prog-id=222 op=UNLOAD Dec 13 00:26:55.343000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcf5bf1010 a2=0 a3=5 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.343000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.343000 audit: BPF prog-id=223 op=LOAD Dec 13 00:26:55.343000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf5bf1230 a2=94 a3=6 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.343000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.343000 audit: BPF prog-id=223 op=UNLOAD Dec 13 00:26:55.343000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcf5bf1230 a2=0 a3=6 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.343000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.344000 audit: BPF prog-id=224 op=LOAD Dec 13 00:26:55.344000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf5bf09e0 a2=94 a3=88 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.344000 audit: BPF prog-id=225 op=LOAD Dec 13 00:26:55.344000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcf5bf0860 a2=94 a3=2 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.344000 audit: BPF prog-id=225 op=UNLOAD Dec 13 00:26:55.344000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcf5bf0890 a2=0 a3=7ffcf5bf0990 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.344000 audit: BPF prog-id=224 op=UNLOAD Dec 13 00:26:55.344000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=240ebd10 a2=0 a3=42c59b908f28fd38 items=0 ppid=4237 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:55.353000 audit: BPF prog-id=226 op=LOAD Dec 13 00:26:55.353000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddd3ab090 a2=98 a3=1999999999999999 items=0 ppid=4237 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.353000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:55.353000 audit: BPF prog-id=226 op=UNLOAD Dec 13 00:26:55.353000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffddd3ab060 a3=0 items=0 ppid=4237 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.353000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:55.353000 audit: BPF prog-id=227 op=LOAD Dec 13 00:26:55.353000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddd3aaf70 a2=94 a3=ffff items=0 ppid=4237 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.353000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:55.353000 audit: BPF prog-id=227 op=UNLOAD Dec 13 00:26:55.353000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffddd3aaf70 a2=94 a3=ffff items=0 ppid=4237 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.353000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:55.353000 audit: BPF prog-id=228 op=LOAD Dec 13 00:26:55.353000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddd3aafb0 a2=94 a3=7ffddd3ab190 items=0 ppid=4237 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.353000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:55.353000 audit: BPF prog-id=228 op=UNLOAD Dec 13 00:26:55.353000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffddd3aafb0 a2=94 a3=7ffddd3ab190 items=0 ppid=4237 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.353000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:55.412367 containerd[1623]: time="2025-12-13T00:26:55.412312474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:55.416267 systemd-networkd[1314]: vxlan.calico: Link UP Dec 13 00:26:55.416275 systemd-networkd[1314]: vxlan.calico: Gained carrier Dec 13 00:26:55.429000 audit: BPF prog-id=229 op=LOAD Dec 13 00:26:55.429000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe796b3040 a2=98 a3=0 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.429000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.429000 audit: BPF prog-id=229 op=UNLOAD Dec 13 00:26:55.429000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe796b3010 a3=0 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.429000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=230 op=LOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe796b2e50 a2=94 a3=54428f items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=230 op=UNLOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe796b2e50 a2=94 a3=54428f items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=231 op=LOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe796b2e80 a2=94 a3=2 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=231 op=UNLOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe796b2e80 a2=0 a3=2 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=232 op=LOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe796b2c30 a2=94 a3=4 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=232 op=UNLOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe796b2c30 a2=94 a3=4 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 audit: BPF prog-id=233 op=LOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe796b2d30 a2=94 a3=7ffe796b2eb0 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.430000 
audit: BPF prog-id=233 op=UNLOAD Dec 13 00:26:55.430000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe796b2d30 a2=0 a3=7ffe796b2eb0 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.430000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.432000 audit: BPF prog-id=234 op=LOAD Dec 13 00:26:55.432000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe796b2460 a2=94 a3=2 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.432000 audit: BPF prog-id=234 op=UNLOAD Dec 13 00:26:55.432000 audit[4556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe796b2460 a2=0 a3=2 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.432000 audit: BPF prog-id=235 op=LOAD Dec 13 00:26:55.432000 audit[4556]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe796b2560 a2=94 a3=30 items=0 ppid=4237 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:55.443000 audit: BPF prog-id=236 op=LOAD Dec 13 00:26:55.443000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe154a8930 a2=98 a3=0 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.443000 audit: BPF prog-id=236 op=UNLOAD Dec 13 00:26:55.443000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe154a8900 a3=0 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 13 00:26:55.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.443000 audit: BPF prog-id=237 op=LOAD Dec 13 00:26:55.443000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe154a8720 a2=94 a3=54428f items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.443000 audit: BPF prog-id=237 op=UNLOAD Dec 13 00:26:55.443000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe154a8720 a2=94 a3=54428f items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.443000 audit: BPF prog-id=238 op=LOAD Dec 13 00:26:55.443000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe154a8750 a2=94 a3=2 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.444000 audit: BPF prog-id=238 op=UNLOAD Dec 13 00:26:55.444000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe154a8750 a2=0 a3=2 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.444000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.446708 containerd[1623]: time="2025-12-13T00:26:55.445181094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:55.446708 containerd[1623]: time="2025-12-13T00:26:55.445289657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:26:55.446809 kubelet[2802]: E1213 00:26:55.446369 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:26:55.446809 kubelet[2802]: E1213 00:26:55.446545 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:26:55.448032 kubelet[2802]: E1213 00:26:55.447863 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdkdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-754d866664-bf6vf_calico-system(331b009d-3266-4003-85e6-8aaabc469f22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:55.448167 containerd[1623]: time="2025-12-13T00:26:55.447998106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:26:55.450729 kubelet[2802]: E1213 00:26:55.450662 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-754d866664-bf6vf" podUID="331b009d-3266-4003-85e6-8aaabc469f22" Dec 13 00:26:55.578543 systemd-networkd[1314]: cali26ff4856297: Gained IPv6LL Dec 13 00:26:55.652000 audit: BPF prog-id=239 op=LOAD Dec 13 00:26:55.652000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe154a8610 a2=94 a3=1 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.652000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.652000 audit: BPF prog-id=239 op=UNLOAD Dec 13 00:26:55.652000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe154a8610 a2=94 a3=1 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.652000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.664000 audit: BPF prog-id=240 op=LOAD Dec 13 00:26:55.664000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe154a8600 a2=94 a3=4 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.664000 audit: BPF prog-id=240 op=UNLOAD Dec 13 00:26:55.664000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe154a8600 a2=0 a3=4 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.664000 audit: BPF prog-id=241 op=LOAD Dec 13 00:26:55.664000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe154a8460 a2=94 a3=5 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.664000 audit: BPF prog-id=241 op=UNLOAD Dec 13 00:26:55.664000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe154a8460 a2=0 a3=5 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.664000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.665000 audit: BPF prog-id=242 op=LOAD Dec 13 00:26:55.665000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe154a8680 a2=94 a3=6 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.665000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.665000 audit: BPF prog-id=242 op=UNLOAD Dec 13 00:26:55.665000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe154a8680 a2=0 a3=6 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.665000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.665000 audit: BPF prog-id=243 op=LOAD Dec 13 00:26:55.665000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe154a7e30 a2=94 a3=88 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.665000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.665000 audit: BPF prog-id=244 op=LOAD Dec 13 00:26:55.665000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe154a7cb0 a2=94 a3=2 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.665000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.665000 audit: BPF prog-id=244 op=UNLOAD Dec 13 00:26:55.665000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe154a7ce0 a2=0 a3=7ffe154a7de0 items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.665000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.666000 audit: BPF prog-id=243 op=UNLOAD Dec 13 00:26:55.666000 audit[4559]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=5 a1=30b5dd10 a2=0 a3=dc8c17a2d2daef8a items=0 ppid=4237 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.666000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:55.674000 audit: BPF prog-id=235 op=UNLOAD Dec 13 00:26:55.674000 audit[4237]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00090c1c0 a2=0 a3=0 items=0 ppid=4218 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.674000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 13 00:26:55.706639 systemd-networkd[1314]: caliaf844692d8a: Gained IPv6LL Dec 13 00:26:55.738000 audit[4593]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:55.738000 audit[4593]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc0ff03200 a2=0 a3=7ffc0ff031ec items=0 ppid=4237 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.738000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:55.742000 audit[4596]: NETFILTER_CFG table=nat:126 family=2 entries=15 op=nft_register_chain pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:55.742000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc3f5c7b30 a2=0 a3=7ffc3f5c7b1c items=0 ppid=4237 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.742000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:55.743000 audit[4591]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:55.743000 audit[4591]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc01613a80 a2=0 a3=7ffc01613a6c items=0 ppid=4237 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.743000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:55.747000 audit[4592]: NETFILTER_CFG table=filter:128 family=2 entries=222 op=nft_register_chain pid=4592 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:55.747000 audit[4592]: SYSCALL arch=c000003e syscall=46 success=yes exit=129820 a0=3 a1=7ffe252bed40 
a2=0 a3=7ffe252bed2c items=0 ppid=4237 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.747000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:55.946521 kubelet[2802]: E1213 00:26:55.946364 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:55.948371 kubelet[2802]: E1213 00:26:55.948303 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:26:55.948898 kubelet[2802]: E1213 00:26:55.948842 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-754d866664-bf6vf" podUID="331b009d-3266-4003-85e6-8aaabc469f22" Dec 13 00:26:55.949262 kubelet[2802]: E1213 00:26:55.948953 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:55.959729 containerd[1623]: time="2025-12-13T00:26:55.959674194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:55.985201 kubelet[2802]: I1213 00:26:55.983560 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v228q" podStartSLOduration=48.983544468 podStartE2EDuration="48.983544468s" podCreationTimestamp="2025-12-13 00:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:55.982601309 +0000 UTC m=+55.560879813" watchObservedRunningTime="2025-12-13 00:26:55.983544468 +0000 UTC m=+55.561822972" Dec 13 00:26:55.985458 containerd[1623]: time="2025-12-13T00:26:55.984068922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:55.985458 containerd[1623]: time="2025-12-13T00:26:55.984235915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:26:55.985930 kubelet[2802]: E1213 00:26:55.985761 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:26:55.985930 kubelet[2802]: E1213 00:26:55.985834 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:26:55.986148 kubelet[2802]: E1213 00:26:55.985989 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twjcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-679b4bb9cf-2wh8x_calico-system(1cc720b4-f00a-45c7-ad27-5d1040c88fe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:55.989059 kubelet[2802]: E1213 00:26:55.988999 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:26:55.999000 audit[4604]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:55.999000 audit[4604]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc393776e0 a2=0 a3=7ffc393776cc items=0 ppid=2915 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:55.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:56.006000 audit[4604]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:56.006000 audit[4604]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc393776e0 a2=0 a3=0 items=0 ppid=2915 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:56.282551 systemd-networkd[1314]: calib1e219bc405: Gained IPv6LL Dec 13 00:26:56.518870 containerd[1623]: time="2025-12-13T00:26:56.518690210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmfwt,Uid:2103450f-178f-4fe4-be81-451ab8c0d111,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:56.602551 systemd-networkd[1314]: cali51e6e6f633c: Gained IPv6LL Dec 13 00:26:56.604433 systemd-networkd[1314]: vxlan.calico: Gained IPv6LL Dec 13 00:26:56.679818 systemd-networkd[1314]: cali309fe988cb5: Link UP Dec 13 00:26:56.680615 systemd-networkd[1314]: cali309fe988cb5: Gained carrier Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.563 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--lmfwt-eth0 goldmane-666569f655- calico-system 2103450f-178f-4fe4-be81-451ab8c0d111 920 0 2025-12-13 00:26:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-lmfwt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali309fe988cb5 [] [] }} 
ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.563 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.593 [INFO][4619] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" HandleID="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Workload="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.593 [INFO][4619] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" HandleID="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Workload="localhost-k8s-goldmane--666569f655--lmfwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-lmfwt", "timestamp":"2025-12-13 00:26:56.593441243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.593 [INFO][4619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.593 [INFO][4619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.593 [INFO][4619] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.601 [INFO][4619] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.607 [INFO][4619] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.611 [INFO][4619] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.612 [INFO][4619] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.615 [INFO][4619] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.615 [INFO][4619] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.616 [INFO][4619] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.639 [INFO][4619] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.673 [INFO][4619] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.673 [INFO][4619] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" host="localhost" Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.673 [INFO][4619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:56.719354 containerd[1623]: 2025-12-13 00:26:56.673 [INFO][4619] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" HandleID="k8s-pod-network.b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Workload="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.720230 containerd[1623]: 2025-12-13 00:26:56.677 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lmfwt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2103450f-178f-4fe4-be81-451ab8c0d111", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-lmfwt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali309fe988cb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:56.720230 containerd[1623]: 2025-12-13 00:26:56.677 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.720230 containerd[1623]: 2025-12-13 00:26:56.677 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali309fe988cb5 ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.720230 containerd[1623]: 2025-12-13 00:26:56.680 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.720230 containerd[1623]: 2025-12-13 00:26:56.681 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lmfwt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2103450f-178f-4fe4-be81-451ab8c0d111", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf", Pod:"goldmane-666569f655-lmfwt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali309fe988cb5", MAC:"46:46:d3:e3:0b:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:56.720230 containerd[1623]: 2025-12-13 00:26:56.715 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" Namespace="calico-system" Pod="goldmane-666569f655-lmfwt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmfwt-eth0" Dec 13 00:26:56.760000 audit[4639]: NETFILTER_CFG table=filter:131 family=2 entries=60 op=nft_register_chain pid=4639 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:56.760000 audit[4639]: SYSCALL arch=c000003e syscall=46 success=yes exit=29932 a0=3 a1=7fff81367ec0 a2=0 a3=7fff81367eac items=0 ppid=4237 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.760000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:56.786260 containerd[1623]: time="2025-12-13T00:26:56.786208314Z" level=info msg="connecting to shim b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf" address="unix:///run/containerd/s/d9719f87de5186200bcf7eebc4a4ca2cdbcc38eabb0b9fadb20d34493d8af6a7" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:56.828688 systemd[1]: Started cri-containerd-b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf.scope - libcontainer container b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf. 
Dec 13 00:26:56.841000 audit: BPF prog-id=245 op=LOAD Dec 13 00:26:56.842000 audit: BPF prog-id=246 op=LOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.842000 audit: BPF prog-id=246 op=UNLOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.842000 audit: BPF prog-id=247 op=LOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.842000 audit: BPF prog-id=248 op=LOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.842000 audit: BPF prog-id=248 op=UNLOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.842000 audit: BPF prog-id=247 op=UNLOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.842000 audit: BPF prog-id=249 op=LOAD Dec 13 00:26:56.842000 audit[4660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4648 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:56.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237366434613632663037383362316262643737626634656261363439 Dec 13 00:26:56.845038 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:56.880640 containerd[1623]: time="2025-12-13T00:26:56.880497197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmfwt,Uid:2103450f-178f-4fe4-be81-451ab8c0d111,Namespace:calico-system,Attempt:0,} returns sandbox id \"b76d4a62f0783b1bbd77bf4eba649be435ef30ad459a76952e829fb8318630bf\"" Dec 13 00:26:56.882672 containerd[1623]: time="2025-12-13T00:26:56.882635918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:26:56.959845 kubelet[2802]: E1213 00:26:56.959790 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:56.960848 kubelet[2802]: E1213 00:26:56.960050 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:56.960848 kubelet[2802]: E1213 00:26:56.960336 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:26:57.035000 audit[4687]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:57.038241 kernel: kauditd_printk_skb: 411 callbacks suppressed Dec 13 00:26:57.038325 kernel: audit: type=1325 audit(1765585617.035:729): table=filter:132 family=2 entries=17 op=nft_register_rule pid=4687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:57.035000 audit[4687]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe092f2c20 a2=0 
a3=7ffe092f2c0c items=0 ppid=2915 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:57.048887 kernel: audit: type=1300 audit(1765585617.035:729): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe092f2c20 a2=0 a3=7ffe092f2c0c items=0 ppid=2915 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:57.048975 kernel: audit: type=1327 audit(1765585617.035:729): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:57.035000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:57.061000 audit[4687]: NETFILTER_CFG table=nat:133 family=2 entries=47 op=nft_register_chain pid=4687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:57.061000 audit[4687]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe092f2c20 a2=0 a3=7ffe092f2c0c items=0 ppid=2915 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:57.073519 kernel: audit: type=1325 audit(1765585617.061:730): table=nat:133 family=2 entries=47 op=nft_register_chain pid=4687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:57.073663 kernel: audit: type=1300 audit(1765585617.061:730): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe092f2c20 a2=0 a3=7ffe092f2c0c items=0 ppid=2915 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:57.073695 kernel: audit: type=1327 audit(1765585617.061:730): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:57.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:57.208983 containerd[1623]: time="2025-12-13T00:26:57.208821712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:57.308640 containerd[1623]: time="2025-12-13T00:26:57.308553552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:57.308640 containerd[1623]: time="2025-12-13T00:26:57.308612944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:26:57.309019 kubelet[2802]: E1213 00:26:57.308951 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:26:57.309019 kubelet[2802]: E1213 
00:26:57.309012 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:26:57.309231 kubelet[2802]: E1213 00:26:57.309170 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv75f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmfwt_calico-system(2103450f-178f-4fe4-be81-451ab8c0d111): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:57.310404 kubelet[2802]: E1213 00:26:57.310358 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:26:57.519332 containerd[1623]: time="2025-12-13T00:26:57.519203108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-8pbbj,Uid:a727a70e-6988-4a99-89fe-438d343667ab,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:57.941465 systemd-networkd[1314]: cali85e897f443b: Link UP Dec 13 00:26:57.942497 systemd-networkd[1314]: cali85e897f443b: Gained carrier Dec 13 00:26:57.969352 kubelet[2802]: E1213 00:26:57.969316 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:57.970087 kubelet[2802]: E1213 00:26:57.969464 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:57.970270 kubelet[2802]: E1213 00:26:57.970231 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:26:57.979361 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:33912.service - OpenSSH per-connection server daemon (10.0.0.1:33912). Dec 13 00:26:57.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.117:22-10.0.0.1:33912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:57.985434 kernel: audit: type=1130 audit(1765585617.978:731): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.117:22-10.0.0.1:33912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:58.080000 audit[4714]: USER_ACCT pid=4714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.083353 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 33912 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:26:58.084510 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:58.082000 audit[4714]: CRED_ACQ pid=4714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.089640 systemd-logind[1589]: New session 10 of user core. 
Dec 13 00:26:58.092911 kernel: audit: type=1101 audit(1765585618.080:732): pid=4714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.092960 kernel: audit: type=1103 audit(1765585618.082:733): pid=4714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.092981 kernel: audit: type=1006 audit(1765585618.082:734): pid=4714 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 00:26:58.082000 audit[4714]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd48cf3860 a2=3 a3=0 items=0 ppid=1 pid=4714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:58.082000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:58.108719 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 00:26:58.110000 audit[4714]: USER_START pid=4714 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.112000 audit[4718]: CRED_ACQ pid=4718 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.631 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0 calico-apiserver-7544896fd5- calico-apiserver a727a70e-6988-4a99-89fe-438d343667ab 926 0 2025-12-13 00:26:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7544896fd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7544896fd5-8pbbj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85e897f443b [] [] }} ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.631 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.660 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" HandleID="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Workload="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.660 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" HandleID="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Workload="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004957a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7544896fd5-8pbbj", "timestamp":"2025-12-13 00:26:57.660098373 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.660 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.661 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.661 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.669 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.674 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.679 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.681 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.684 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.684 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.685 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3 Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.713 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.934 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.934 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" host="localhost" Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.934 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:26:58.286819 containerd[1623]: 2025-12-13 00:26:57.934 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" HandleID="k8s-pod-network.629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Workload="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.288194 containerd[1623]: 2025-12-13 00:26:57.938 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0", GenerateName:"calico-apiserver-7544896fd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a727a70e-6988-4a99-89fe-438d343667ab", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7544896fd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7544896fd5-8pbbj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85e897f443b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:58.288194 containerd[1623]: 2025-12-13 00:26:57.938 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.288194 containerd[1623]: 2025-12-13 00:26:57.938 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85e897f443b ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.288194 containerd[1623]: 2025-12-13 00:26:57.942 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.288194 containerd[1623]: 2025-12-13 00:26:57.943 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0", GenerateName:"calico-apiserver-7544896fd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a727a70e-6988-4a99-89fe-438d343667ab", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7544896fd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3", Pod:"calico-apiserver-7544896fd5-8pbbj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85e897f443b", MAC:"ea:ee:88:b2:44:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:58.288194 containerd[1623]: 2025-12-13 00:26:58.282 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" Namespace="calico-apiserver" Pod="calico-apiserver-7544896fd5-8pbbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7544896fd5--8pbbj-eth0" Dec 13 00:26:58.299526 sshd[4718]: Connection closed by 10.0.0.1 port 33912 Dec 13 00:26:58.302451 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:58.302000 audit[4714]: USER_END pid=4714 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.303000 audit[4714]: CRED_DISP pid=4714 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.307000 audit[4735]: NETFILTER_CFG table=filter:134 family=2 entries=57 op=nft_register_chain pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:58.307000 audit[4735]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7ffcd16d63d0 a2=0 a3=7ffcd16d63bc items=0 ppid=4237 pid=4735 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:58.307000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:58.309997 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:33912.service: Deactivated successfully. Dec 13 00:26:58.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.117:22-10.0.0.1:33912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:58.313611 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 00:26:58.314791 systemd-logind[1589]: Session 10 logged out. Waiting for processes to exit. Dec 13 00:26:58.316715 systemd-logind[1589]: Removed session 10. Dec 13 00:26:58.330798 systemd-networkd[1314]: cali309fe988cb5: Gained IPv6LL Dec 13 00:26:58.702748 containerd[1623]: time="2025-12-13T00:26:58.702687730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxl5n,Uid:0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:58.953000 audit[4753]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:58.953000 audit[4753]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcfdc69820 a2=0 a3=7ffcfdc6980c items=0 ppid=2915 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:58.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:58.961000 audit[4753]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:58.961000 audit[4753]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcfdc69820 a2=0 a3=7ffcfdc6980c items=0 ppid=2915 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:58.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:58.989604 containerd[1623]: time="2025-12-13T00:26:58.989351740Z" level=info msg="connecting to shim 629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3" address="unix:///run/containerd/s/3575c7886c8a669cbd25c2085446120b9ef1a7b17b5c6f04ea15fc79620575f9" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:59.033952 systemd[1]: Started cri-containerd-629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3.scope - libcontainer container 629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3. 
Dec 13 00:26:59.059000 audit: BPF prog-id=250 op=LOAD Dec 13 00:26:59.060000 audit: BPF prog-id=251 op=LOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.060000 audit: BPF prog-id=251 op=UNLOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.060000 audit: BPF prog-id=252 op=LOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.060000 audit: BPF prog-id=253 op=LOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.060000 audit: BPF prog-id=253 op=UNLOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.060000 audit: BPF prog-id=252 op=UNLOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.060000 audit: BPF prog-id=254 op=LOAD Dec 13 00:26:59.060000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4767 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632396365386634313732343861633534623330623464323134373765 Dec 13 00:26:59.065293 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:59.099349 systemd-networkd[1314]: cali3d653f8bbb6: Link UP Dec 13 00:26:59.100112 systemd-networkd[1314]: cali3d653f8bbb6: Gained carrier Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:58.995 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hxl5n-eth0 csi-node-driver- calico-system 0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77 780 0 2025-12-13 00:26:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hxl5n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d653f8bbb6 [] [] }} ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:58.995 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.042 [INFO][4783] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" HandleID="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Workload="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.042 [INFO][4783] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" HandleID="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" 
Workload="localhost-k8s-csi--node--driver--hxl5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hxl5n", "timestamp":"2025-12-13 00:26:59.042230706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.042 [INFO][4783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.042 [INFO][4783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.042 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.053 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.066 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.071 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.073 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.075 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.075 [INFO][4783] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.077 [INFO][4783] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6 Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.083 [INFO][4783] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.090 [INFO][4783] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.090 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" host="localhost" Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.090 [INFO][4783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:59.128035 containerd[1623]: 2025-12-13 00:26:59.090 [INFO][4783] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" HandleID="k8s-pod-network.13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Workload="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.128918 containerd[1623]: 2025-12-13 00:26:59.095 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hxl5n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hxl5n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d653f8bbb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:59.128918 containerd[1623]: 2025-12-13 00:26:59.095 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.128918 containerd[1623]: 2025-12-13 00:26:59.095 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d653f8bbb6 ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.128918 containerd[1623]: 2025-12-13 00:26:59.100 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.128918 containerd[1623]: 2025-12-13 00:26:59.102 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hxl5n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6", Pod:"csi-node-driver-hxl5n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d653f8bbb6", MAC:"2e:01:4d:86:86:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:59.128918 containerd[1623]: 2025-12-13 00:26:59.119 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" Namespace="calico-system" Pod="csi-node-driver-hxl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--hxl5n-eth0" Dec 13 00:26:59.142000 audit[4820]: NETFILTER_CFG table=filter:137 family=2 entries=66 op=nft_register_chain pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:59.142000 audit[4820]: SYSCALL arch=c000003e syscall=46 success=yes exit=29556 a0=3 a1=7fff5ed27f70 a2=0 a3=7fff5ed27f5c items=0 ppid=4237 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.142000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:59.146782 containerd[1623]: time="2025-12-13T00:26:59.146734775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7544896fd5-8pbbj,Uid:a727a70e-6988-4a99-89fe-438d343667ab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"629ce8f417248ac54b30b4d21477e9a60fab7b6c97ccba24fdb97ec292fb64b3\"" Dec 13 00:26:59.149663 containerd[1623]: time="2025-12-13T00:26:59.149617942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:26:59.485283 systemd-networkd[1314]: cali85e897f443b: Gained IPv6LL Dec 13 00:26:59.495538 containerd[1623]: time="2025-12-13T00:26:59.495479868Z" level=info msg="connecting to shim 13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6" address="unix:///run/containerd/s/caed5735dff508ab30c4a30e836c78d2a60754b29acc5aa7cf5f2ef3ca470d49" 
namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:59.530737 systemd[1]: Started cri-containerd-13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6.scope - libcontainer container 13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6. Dec 13 00:26:59.545000 audit: BPF prog-id=255 op=LOAD Dec 13 00:26:59.546000 audit: BPF prog-id=256 op=LOAD Dec 13 00:26:59.546000 audit[4841]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.546000 audit: BPF prog-id=256 op=UNLOAD Dec 13 00:26:59.546000 audit[4841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.546000 audit: BPF prog-id=257 op=LOAD Dec 13 00:26:59.546000 audit[4841]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.546000 audit: BPF prog-id=258 op=LOAD Dec 13 00:26:59.546000 audit[4841]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.547000 audit: BPF prog-id=258 op=UNLOAD Dec 13 00:26:59.547000 audit[4841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.547000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.547000 audit: BPF prog-id=257 op=UNLOAD Dec 13 00:26:59.547000 audit[4841]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.547000 audit: BPF prog-id=259 op=LOAD Dec 13 00:26:59.547000 audit[4841]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4829 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:59.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656265633463313835353730336332613034626630646364613036 Dec 13 00:26:59.549815 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:59.553780 containerd[1623]: time="2025-12-13T00:26:59.553730938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:59.744269 containerd[1623]: time="2025-12-13T00:26:59.744126092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:59.744269 containerd[1623]: time="2025-12-13T00:26:59.744174022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:26:59.744680 kubelet[2802]: E1213 00:26:59.744498 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:59.744680 kubelet[2802]: E1213 00:26:59.744565 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:59.745193 kubelet[2802]: E1213 00:26:59.744742 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7544896fd5-8pbbj_calico-apiserver(a727a70e-6988-4a99-89fe-438d343667ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:59.746115 kubelet[2802]: E1213 00:26:59.746052 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:26:59.858035 containerd[1623]: time="2025-12-13T00:26:59.857971239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hxl5n,Uid:0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77,Namespace:calico-system,Attempt:0,} returns sandbox id \"13ebec4c1855703c2a04bf0dcda0634887464fe3ff96884fffe70eea41224aa6\"" Dec 13 00:26:59.860431 containerd[1623]: time="2025-12-13T00:26:59.860394444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:26:59.980151 kubelet[2802]: E1213 00:26:59.980082 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:27:00.212000 audit[4867]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:00.212000 audit[4867]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb9989c90 a2=0 a3=7ffdb9989c7c items=0 ppid=2915 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:00.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:00.219000 audit[4867]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:00.219000 audit[4867]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb9989c90 a2=0 a3=7ffdb9989c7c items=0 ppid=2915 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:00.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:00.224127 containerd[1623]: time="2025-12-13T00:27:00.224054758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:00.240927 containerd[1623]: time="2025-12-13T00:27:00.240858289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:27:00.241113 containerd[1623]: time="2025-12-13T00:27:00.240891612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:00.241258 kubelet[2802]: E1213 00:27:00.241203 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:00.241333 kubelet[2802]: E1213 00:27:00.241265 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:00.241559 kubelet[2802]: E1213 00:27:00.241512 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs77h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:00.243603 containerd[1623]: time="2025-12-13T00:27:00.243555037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:27:00.604293 containerd[1623]: time="2025-12-13T00:27:00.604229003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:00.605527 containerd[1623]: time="2025-12-13T00:27:00.605491669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:27:00.605605 containerd[1623]: time="2025-12-13T00:27:00.605570949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:00.605833 kubelet[2802]: E1213 00:27:00.605759 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:00.605833 kubelet[2802]: E1213 00:27:00.605830 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:00.606096 kubelet[2802]: E1213 00:27:00.606035 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs77h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:00.607286 kubelet[2802]: E1213 00:27:00.607210 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:27:00.698568 systemd-networkd[1314]: cali3d653f8bbb6: Gained IPv6LL Dec 13 00:27:00.983349 kubelet[2802]: E1213 00:27:00.983061 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:27:00.983900 kubelet[2802]: E1213 00:27:00.983652 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:27:03.325963 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:46246.service - OpenSSH per-connection server daemon (10.0.0.1:46246). Dec 13 00:27:03.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.117:22-10.0.0.1:46246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.327603 kernel: kauditd_printk_skb: 69 callbacks suppressed Dec 13 00:27:03.327684 kernel: audit: type=1130 audit(1765585623.324:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.117:22-10.0.0.1:46246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.388000 audit[4879]: USER_ACCT pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.390181 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 46246 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:03.393329 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:03.389000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.399308 systemd-logind[1589]: New session 11 of user core. 
Dec 13 00:27:03.400748 kernel: audit: type=1101 audit(1765585623.388:763): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.400810 kernel: audit: type=1103 audit(1765585623.389:764): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.403717 kernel: audit: type=1006 audit(1765585623.390:765): pid=4879 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 00:27:03.403764 kernel: audit: type=1300 audit(1765585623.390:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde25587b0 a2=3 a3=0 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:03.390000 audit[4879]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde25587b0 a2=3 a3=0 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:03.408856 kernel: audit: type=1327 audit(1765585623.390:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:03.390000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:03.416775 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 00:27:03.418000 audit[4879]: USER_START pid=4879 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.420000 audit[4883]: CRED_ACQ pid=4883 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.430656 kernel: audit: type=1105 audit(1765585623.418:766): pid=4879 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.430741 kernel: audit: type=1103 audit(1765585623.420:767): pid=4883 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.638461 sshd[4883]: Connection closed by 10.0.0.1 port 46246 Dec 13 00:27:03.638888 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:03.639000 audit[4879]: USER_END pid=4879 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.648253 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:46246.service: Deactivated successfully. Dec 13 00:27:03.639000 audit[4879]: CRED_DISP pid=4879 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.651746 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 00:27:03.652887 systemd-logind[1589]: Session 11 logged out. Waiting for processes to exit. Dec 13 00:27:03.655830 kernel: audit: type=1106 audit(1765585623.639:768): pid=4879 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.655893 kernel: audit: type=1104 audit(1765585623.639:769): pid=4879 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.117:22-10.0.0.1:46246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.655931 systemd-logind[1589]: Removed session 11. 
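Every pull in this sequence fails the same way: containerd tries to resolve ghcr.io/flatcar/calico/&lt;image&gt;:v3.30.4, ghcr.io answers 404 Not Found, and the kubelet surfaces ErrImagePull followed by ImagePullBackOff. A minimal sketch of reproducing the same resolution failure directly against the node's containerd socket, outside the kubelet, is below; it assumes the containerd 1.x Go client module path (containerd 2.x moved the client to github.com/containerd/containerd/v2/client) and the k8s.io namespace visible in the runc --root paths earlier in this log.

```go
// pull_check.go: attempt the same pull containerd performs for the kubelet, so
// the NotFound can be reproduced outside the pod lifecycle. Sketch only; it
// assumes the default socket path and the containerd 1.x client packages.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// Same namespace the CRI plugin uses (seen in the runc --root path above).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expected here: the tag does not resolve, mirroring the kubelet's ErrImagePull.
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}
```

If a pull like this were to succeed while the kubelet's still fails, the difference would usually be registry credentials or CRI pull configuration rather than the tag itself; here the tag simply does not exist, so both fail identically.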
Dec 13 00:27:06.519370 containerd[1623]: time="2025-12-13T00:27:06.519322974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:06.982578 containerd[1623]: time="2025-12-13T00:27:06.982515649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:07.040670 containerd[1623]: time="2025-12-13T00:27:07.040588933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:07.040878 containerd[1623]: time="2025-12-13T00:27:07.040683515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:07.040974 kubelet[2802]: E1213 00:27:07.040909 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:07.041332 kubelet[2802]: E1213 00:27:07.040987 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:07.041332 kubelet[2802]: E1213 00:27:07.041144 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2szwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7544896fd5-wwf2v_calico-apiserver(baf0c6ad-ada1-4a24-b663-b32f96db48d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:07.042325 kubelet[2802]: E1213 00:27:07.042278 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:27:08.655778 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:46248.service - OpenSSH per-connection server daemon (10.0.0.1:46248). Dec 13 00:27:08.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.117:22-10.0.0.1:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:08.657473 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:08.657518 kernel: audit: type=1130 audit(1765585628.654:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.117:22-10.0.0.1:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:08.721000 audit[4908]: USER_ACCT pid=4908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.722777 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 46248 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:08.725597 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:08.722000 audit[4908]: CRED_ACQ pid=4908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.731780 systemd-logind[1589]: New session 12 of user core. 
Dec 13 00:27:08.734491 kernel: audit: type=1101 audit(1765585628.721:772): pid=4908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.734555 kernel: audit: type=1103 audit(1765585628.722:773): pid=4908 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.734578 kernel: audit: type=1006 audit(1765585628.722:774): pid=4908 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 13 00:27:08.722000 audit[4908]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc29d20d90 a2=3 a3=0 items=0 ppid=1 pid=4908 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:08.744748 kernel: audit: type=1300 audit(1765585628.722:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc29d20d90 a2=3 a3=0 items=0 ppid=1 pid=4908 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:08.744790 kernel: audit: type=1327 audit(1765585628.722:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:08.722000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:08.748676 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 00:27:08.750000 audit[4908]: USER_START pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.752000 audit[4912]: CRED_ACQ pid=4912 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.763939 kernel: audit: type=1105 audit(1765585628.750:775): pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.764034 kernel: audit: type=1103 audit(1765585628.752:776): pid=4912 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.941229 sshd[4912]: Connection closed by 10.0.0.1 port 46248 Dec 13 00:27:08.941605 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:08.942000 audit[4908]: USER_END pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.942000 audit[4908]: CRED_DISP pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.981656 kernel: audit: type=1106 audit(1765585628.942:777): pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.981847 kernel: audit: type=1104 audit(1765585628.942:778): pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:08.992700 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:46248.service: Deactivated successfully. Dec 13 00:27:08.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.117:22-10.0.0.1:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:08.995452 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 00:27:08.996362 systemd-logind[1589]: Session 12 logged out. Waiting for processes to exit. Dec 13 00:27:09.001781 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:46250.service - OpenSSH per-connection server daemon (10.0.0.1:46250). 
Dec 13 00:27:09.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.117:22-10.0.0.1:46250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:09.002525 systemd-logind[1589]: Removed session 12. Dec 13 00:27:09.073000 audit[4927]: USER_ACCT pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.075026 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 46250 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:09.074000 audit[4927]: CRED_ACQ pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.074000 audit[4927]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe87f81e30 a2=3 a3=0 items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:09.074000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:09.077486 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:09.083215 systemd-logind[1589]: New session 13 of user core. Dec 13 00:27:09.090549 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 00:27:09.091000 audit[4927]: USER_START pid=4927 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.093000 audit[4931]: CRED_ACQ pid=4931 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.224200 sshd[4931]: Connection closed by 10.0.0.1 port 46250 Dec 13 00:27:09.225776 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:09.226000 audit[4927]: USER_END pid=4927 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.226000 audit[4927]: CRED_DISP pid=4927 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.237471 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:46250.service: Deactivated successfully. Dec 13 00:27:09.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.117:22-10.0.0.1:46250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:27:09.244042 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 00:27:09.248710 systemd-logind[1589]: Session 13 logged out. Waiting for processes to exit. Dec 13 00:27:09.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.117:22-10.0.0.1:46264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:09.252000 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:46264.service - OpenSSH per-connection server daemon (10.0.0.1:46264). Dec 13 00:27:09.253998 systemd-logind[1589]: Removed session 13. Dec 13 00:27:09.321000 audit[4944]: USER_ACCT pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.323440 sshd[4944]: Accepted publickey for core from 10.0.0.1 port 46264 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:09.322000 audit[4944]: CRED_ACQ pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.323000 audit[4944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeac2e7310 a2=3 a3=0 items=0 ppid=1 pid=4944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:09.323000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:09.325717 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:09.331561 systemd-logind[1589]: New session 14 of user core. Dec 13 00:27:09.346727 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 00:27:09.348000 audit[4944]: USER_START pid=4944 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.349000 audit[4948]: CRED_ACQ pid=4948 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.435823 sshd[4948]: Connection closed by 10.0.0.1 port 46264 Dec 13 00:27:09.436150 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:09.436000 audit[4944]: USER_END pid=4944 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.436000 audit[4944]: CRED_DISP pid=4944 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.441804 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:46264.service: Deactivated successfully. Dec 13 00:27:09.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.117:22-10.0.0.1:46264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:09.444505 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 00:27:09.445582 systemd-logind[1589]: Session 14 logged out. Waiting for processes to exit. Dec 13 00:27:09.449191 systemd-logind[1589]: Removed session 14. 
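The PROCTITLE values throughout these audit records (the long hex strings after proctitle=) are just the process command line, hex-encoded with NUL-separated argv entries. A small decoder makes them readable; the sample values below are copied from the sshd-session and runc records above (the runc one truncated, as in the log). Illustrative helper only.

```go
// proctitle_decode.go: render the hex-encoded proctitle= fields from the audit
// records above as plain command lines. The kernel hex-encodes the process
// title and separates argv entries with NUL bytes, so decoding and swapping
// NULs for spaces recovers the original invocation.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) string {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return fmt.Sprintf("<invalid hex: %v>", err)
	}
	return strings.ReplaceAll(string(raw), "\x00", " ")
}

func main() {
	samples := []string{
		// sshd-session records above
		"737368642D73657373696F6E3A20636F7265205B707269765D",
		// runc record (truncated here, as in the log)
		"72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F",
	}
	for _, s := range samples {
		fmt.Println(decodeProctitle(s))
	}
	// Output:
	// sshd-session: core [priv]
	// runc --root /run/containerd/runc/k8s.io
}
```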
Dec 13 00:27:10.530607 containerd[1623]: time="2025-12-13T00:27:10.530299476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:27:10.966560 containerd[1623]: time="2025-12-13T00:27:10.966507584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:11.006225 containerd[1623]: time="2025-12-13T00:27:11.006145109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:27:11.006364 containerd[1623]: time="2025-12-13T00:27:11.006223881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:11.006440 kubelet[2802]: E1213 00:27:11.006339 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:11.006440 kubelet[2802]: E1213 00:27:11.006409 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:11.006837 kubelet[2802]: E1213 00:27:11.006572 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twjcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-679b4bb9cf-2wh8x_calico-system(1cc720b4-f00a-45c7-ad27-5d1040c88fe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:11.007806 kubelet[2802]: E1213 00:27:11.007771 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:27:11.519709 containerd[1623]: time="2025-12-13T00:27:11.519636852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:27:12.016732 containerd[1623]: time="2025-12-13T00:27:12.016669754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:12.090976 containerd[1623]: time="2025-12-13T00:27:12.090896124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:27:12.091199 containerd[1623]: time="2025-12-13T00:27:12.090905212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:12.091279 kubelet[2802]: E1213 00:27:12.091223 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:27:12.091692 kubelet[2802]: E1213 00:27:12.091293 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:27:12.091692 kubelet[2802]: E1213 00:27:12.091454 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9c52d5ff733e47bb827dcc82f1f2eefb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdkdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-754d866664-bf6vf_calico-system(331b009d-3266-4003-85e6-8aaabc469f22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:12.093771 containerd[1623]: time="2025-12-13T00:27:12.093717176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:27:12.450825 containerd[1623]: time="2025-12-13T00:27:12.450735559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:12.473091 containerd[1623]: time="2025-12-13T00:27:12.472997394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:27:12.473252 containerd[1623]: time="2025-12-13T00:27:12.473022893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:12.473482 kubelet[2802]: E1213 00:27:12.473352 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:27:12.473578 kubelet[2802]: E1213 00:27:12.473494 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:27:12.473752 kubelet[2802]: E1213 00:27:12.473676 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdkdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-754d866664-bf6vf_calico-system(331b009d-3266-4003-85e6-8aaabc469f22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:12.474992 kubelet[2802]: E1213 00:27:12.474929 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-754d866664-bf6vf" podUID="331b009d-3266-4003-85e6-8aaabc469f22" Dec 13 00:27:12.519999 containerd[1623]: time="2025-12-13T00:27:12.519869559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:27:12.868402 containerd[1623]: time="2025-12-13T00:27:12.868304768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:12.870006 containerd[1623]: time="2025-12-13T00:27:12.869939253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 
00:27:12.870006 containerd[1623]: time="2025-12-13T00:27:12.869968019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:12.870331 kubelet[2802]: E1213 00:27:12.870222 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:12.870331 kubelet[2802]: E1213 00:27:12.870299 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:12.870658 kubelet[2802]: E1213 00:27:12.870517 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv75f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmfwt_calico-system(2103450f-178f-4fe4-be81-451ab8c0d111): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:12.872008 kubelet[2802]: E1213 00:27:12.871971 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:27:13.518969 containerd[1623]: time="2025-12-13T00:27:13.518909171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:27:13.983713 containerd[1623]: time="2025-12-13T00:27:13.983637095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:14.055273 containerd[1623]: time="2025-12-13T00:27:14.055182199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:27:14.055273 containerd[1623]: time="2025-12-13T00:27:14.055251432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:14.055599 kubelet[2802]: E1213 00:27:14.055543 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:14.056004 kubelet[2802]: E1213 00:27:14.055612 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:14.056004 kubelet[2802]: E1213 00:27:14.055779 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs77h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:14.057815 containerd[1623]: time="2025-12-13T00:27:14.057779726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:27:14.415719 containerd[1623]: time="2025-12-13T00:27:14.415619794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:14.417336 containerd[1623]: time="2025-12-13T00:27:14.417294412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:27:14.417506 containerd[1623]: time="2025-12-13T00:27:14.417434611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:14.417696 kubelet[2802]: E1213 00:27:14.417636 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:14.417757 kubelet[2802]: E1213 00:27:14.417705 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:14.417966 kubelet[2802]: E1213 00:27:14.417891 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs77h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:14.419489 kubelet[2802]: E1213 00:27:14.419437 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:27:14.449032 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:57576.service - OpenSSH per-connection server daemon (10.0.0.1:57576). 
Dec 13 00:27:14.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.117:22-10.0.0.1:57576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:14.450700 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 00:27:14.450805 kernel: audit: type=1130 audit(1765585634.448:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.117:22-10.0.0.1:57576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:14.518000 audit[4962]: USER_ACCT pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.520024 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 57576 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:14.522629 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:14.520000 audit[4962]: CRED_ACQ pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.528151 systemd-logind[1589]: New session 15 of user core. Dec 13 00:27:14.531198 kernel: audit: type=1101 audit(1765585634.518:799): pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.531284 kernel: audit: type=1103 audit(1765585634.520:800): pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.531320 kernel: audit: type=1006 audit(1765585634.520:801): pid=4962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 00:27:14.534517 kernel: audit: type=1300 audit(1765585634.520:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4706ab80 a2=3 a3=0 items=0 ppid=1 pid=4962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:14.520000 audit[4962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4706ab80 a2=3 a3=0 items=0 ppid=1 pid=4962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:14.520000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:14.542844 kernel: audit: type=1327 audit(1765585634.520:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:14.552648 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 00:27:14.556000 audit[4962]: USER_START pid=4962 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.565514 kernel: audit: type=1105 audit(1765585634.556:802): pid=4962 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.565625 kernel: audit: type=1103 audit(1765585634.558:803): pid=4966 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.558000 audit[4966]: CRED_ACQ pid=4966 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.650087 sshd[4966]: Connection closed by 10.0.0.1 port 57576 Dec 13 00:27:14.650541 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:14.651000 audit[4962]: USER_END pid=4962 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.657454 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:57576.service: Deactivated successfully. Dec 13 00:27:14.664518 kernel: audit: type=1106 audit(1765585634.651:804): pid=4962 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.664582 kernel: audit: type=1104 audit(1765585634.651:805): pid=4962 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.651000 audit[4962]: CRED_DISP pid=4962 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.661413 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 00:27:14.662864 systemd-logind[1589]: Session 15 logged out. Waiting for processes to exit. Dec 13 00:27:14.664662 systemd-logind[1589]: Removed session 15. Dec 13 00:27:14.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.117:22-10.0.0.1:57576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:16.520178 containerd[1623]: time="2025-12-13T00:27:16.520136932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:16.849673 containerd[1623]: time="2025-12-13T00:27:16.849612555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:16.851066 containerd[1623]: time="2025-12-13T00:27:16.851001013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:16.851172 containerd[1623]: time="2025-12-13T00:27:16.851047763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:16.851345 kubelet[2802]: E1213 00:27:16.851296 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:16.851768 kubelet[2802]: E1213 00:27:16.851364 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:16.851768 kubelet[2802]: E1213 00:27:16.851568 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7544896fd5-8pbbj_calico-apiserver(a727a70e-6988-4a99-89fe-438d343667ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:16.852832 kubelet[2802]: E1213 00:27:16.852780 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:27:18.519877 kubelet[2802]: E1213 00:27:18.519787 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:27:19.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.117:22-10.0.0.1:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:19.669565 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:19.669598 kernel: audit: type=1130 audit(1765585639.666:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.117:22-10.0.0.1:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:19.667929 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:57580.service - OpenSSH per-connection server daemon (10.0.0.1:57580). 
Dec 13 00:27:19.729000 audit[4991]: USER_ACCT pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.730720 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 57580 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:19.733150 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:19.730000 audit[4991]: CRED_ACQ pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.738769 systemd-logind[1589]: New session 16 of user core. Dec 13 00:27:19.741017 kernel: audit: type=1101 audit(1765585639.729:808): pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.741109 kernel: audit: type=1103 audit(1765585639.730:809): pid=4991 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.741142 kernel: audit: type=1006 audit(1765585639.730:810): pid=4991 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 00:27:19.730000 audit[4991]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc7f4d0e0 a2=3 a3=0 items=0 ppid=1 pid=4991 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:19.749874 kernel: audit: type=1300 audit(1765585639.730:810): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc7f4d0e0 a2=3 a3=0 items=0 ppid=1 pid=4991 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:19.749949 kernel: audit: type=1327 audit(1765585639.730:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:19.730000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:19.750636 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 00:27:19.751000 audit[4991]: USER_START pid=4991 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.753000 audit[4995]: CRED_ACQ pid=4995 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.764687 kernel: audit: type=1105 audit(1765585639.751:811): pid=4991 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.764788 kernel: audit: type=1103 audit(1765585639.753:812): pid=4995 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.826876 sshd[4995]: Connection closed by 10.0.0.1 port 57580 Dec 13 00:27:19.827205 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:19.827000 audit[4991]: USER_END pid=4991 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.833007 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:57580.service: Deactivated successfully. Dec 13 00:27:19.835336 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 00:27:19.827000 audit[4991]: CRED_DISP pid=4991 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.837189 systemd-logind[1589]: Session 16 logged out. Waiting for processes to exit. Dec 13 00:27:19.838131 systemd-logind[1589]: Removed session 16. Dec 13 00:27:19.840669 kernel: audit: type=1106 audit(1765585639.827:813): pid=4991 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.840728 kernel: audit: type=1104 audit(1765585639.827:814): pid=4991 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.117:22-10.0.0.1:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:24.155439 kubelet[2802]: E1213 00:27:24.155400 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:24.525760 kubelet[2802]: E1213 00:27:24.525574 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:27:24.840040 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:45778.service - OpenSSH per-connection server daemon (10.0.0.1:45778). Dec 13 00:27:24.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.117:22-10.0.0.1:45778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:24.842138 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:24.842238 kernel: audit: type=1130 audit(1765585644.838:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.117:22-10.0.0.1:45778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:24.912000 audit[5037]: USER_ACCT pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.913779 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 45778 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:24.916528 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:24.913000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.928637 kernel: audit: type=1101 audit(1765585644.912:817): pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.928766 kernel: audit: type=1103 audit(1765585644.913:818): pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.928799 kernel: audit: type=1006 audit(1765585644.913:819): pid=5037 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 00:27:24.931580 systemd-logind[1589]: New session 17 of user core. 
Dec 13 00:27:24.913000 audit[5037]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaecd34c0 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:24.942416 kernel: audit: type=1300 audit(1765585644.913:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaecd34c0 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:24.913000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:24.951822 kernel: audit: type=1327 audit(1765585644.913:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:24.952276 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 00:27:24.958000 audit[5037]: USER_START pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.968425 kernel: audit: type=1105 audit(1765585644.958:820): pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.963000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.977453 kernel: audit: type=1103 audit(1765585644.963:821): pid=5041 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:25.094882 sshd[5041]: Connection closed by 10.0.0.1 port 45778 Dec 13 00:27:25.095592 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:25.095000 audit[5037]: USER_END pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:25.102338 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:45778.service: Deactivated successfully. Dec 13 00:27:25.096000 audit[5037]: CRED_DISP pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:25.105029 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 00:27:25.105996 systemd-logind[1589]: Session 17 logged out. Waiting for processes to exit. Dec 13 00:27:25.107811 systemd-logind[1589]: Removed session 17. 
Dec 13 00:27:25.108815 kernel: audit: type=1106 audit(1765585645.095:822): pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:25.108883 kernel: audit: type=1104 audit(1765585645.096:823): pid=5037 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:25.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.117:22-10.0.0.1:45778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:25.519143 kubelet[2802]: E1213 00:27:25.518942 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:25.521134 kubelet[2802]: E1213 00:27:25.521089 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:27:26.521708 kubelet[2802]: E1213 00:27:26.521657 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:27:27.519821 kubelet[2802]: E1213 00:27:27.519729 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-754d866664-bf6vf" podUID="331b009d-3266-4003-85e6-8aaabc469f22" Dec 13 00:27:30.114145 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:60208.service - OpenSSH per-connection server daemon (10.0.0.1:60208). Dec 13 00:27:30.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.117:22-10.0.0.1:60208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:30.210721 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:30.210873 kernel: audit: type=1130 audit(1765585650.112:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.117:22-10.0.0.1:60208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:30.275000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.278599 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 60208 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:30.280188 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:30.277000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.291590 kernel: audit: type=1101 audit(1765585650.275:826): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.291710 kernel: audit: type=1103 audit(1765585650.277:827): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.291734 kernel: audit: type=1006 audit(1765585650.277:828): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 00:27:30.295398 kernel: audit: type=1300 audit(1765585650.277:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaffa5dc0 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:30.277000 audit[5059]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaffa5dc0 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:30.294913 systemd-logind[1589]: New session 18 of user core. 
Dec 13 00:27:30.277000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:30.307079 kernel: audit: type=1327 audit(1765585650.277:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:30.324811 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 00:27:30.331000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.334000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.346840 kernel: audit: type=1105 audit(1765585650.331:829): pid=5059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.347042 kernel: audit: type=1103 audit(1765585650.334:830): pid=5063 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.433150 sshd[5063]: Connection closed by 10.0.0.1 port 60208 Dec 13 00:27:30.433607 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:30.433000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.439213 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:60208.service: Deactivated successfully. Dec 13 00:27:30.441820 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 00:27:30.434000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.443930 systemd-logind[1589]: Session 18 logged out. Waiting for processes to exit. Dec 13 00:27:30.446739 systemd-logind[1589]: Removed session 18. 
Dec 13 00:27:30.479457 kernel: audit: type=1106 audit(1765585650.433:831): pid=5059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.479594 kernel: audit: type=1104 audit(1765585650.434:832): pid=5059 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.117:22-10.0.0.1:60208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:31.519305 kubelet[2802]: E1213 00:27:31.519012 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:31.520220 kubelet[2802]: E1213 00:27:31.520144 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:27:31.520533 containerd[1623]: time="2025-12-13T00:27:31.520488317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:31.884586 containerd[1623]: time="2025-12-13T00:27:31.884509738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:31.889224 containerd[1623]: time="2025-12-13T00:27:31.889150154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:31.889616 containerd[1623]: time="2025-12-13T00:27:31.889328884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:31.889739 kubelet[2802]: E1213 00:27:31.889675 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:31.889830 kubelet[2802]: E1213 00:27:31.889747 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:31.890047 kubelet[2802]: E1213 00:27:31.889945 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2szwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7544896fd5-wwf2v_calico-apiserver(baf0c6ad-ada1-4a24-b663-b32f96db48d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:31.891262 kubelet[2802]: E1213 00:27:31.891200 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:27:35.460921 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:60210.service - OpenSSH per-connection server daemon (10.0.0.1:60210). Dec 13 00:27:35.465297 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:35.465414 kernel: audit: type=1130 audit(1765585655.460:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.117:22-10.0.0.1:60210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:35.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.117:22-10.0.0.1:60210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:35.519924 kubelet[2802]: E1213 00:27:35.518487 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:35.539000 audit[5076]: USER_ACCT pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.541432 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 60210 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:35.543803 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:35.548939 systemd-logind[1589]: New session 19 of user core. Dec 13 00:27:35.541000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.561176 kernel: audit: type=1101 audit(1765585655.539:835): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.561317 kernel: audit: type=1103 audit(1765585655.541:836): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.561351 kernel: audit: type=1006 audit(1765585655.541:837): pid=5076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 13 00:27:35.564817 kernel: audit: type=1300 audit(1765585655.541:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff760839e0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:35.541000 audit[5076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff760839e0 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:35.541000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:35.574201 kernel: audit: type=1327 audit(1765585655.541:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:35.577761 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 00:27:35.579000 audit[5076]: USER_START pid=5076 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.581000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.631236 kernel: audit: type=1105 audit(1765585655.579:838): pid=5076 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.631820 kernel: audit: type=1103 audit(1765585655.581:839): pid=5080 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.805529 sshd[5080]: Connection closed by 10.0.0.1 port 60210 Dec 13 00:27:35.805994 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:35.806000 audit[5076]: USER_END pid=5076 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.806000 audit[5076]: CRED_DISP pid=5076 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.820772 kernel: audit: type=1106 audit(1765585655.806:840): pid=5076 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.820902 kernel: audit: type=1104 audit(1765585655.806:841): pid=5076 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.831257 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:60210.service: Deactivated successfully. Dec 13 00:27:35.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.117:22-10.0.0.1:60210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:35.833783 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 00:27:35.834803 systemd-logind[1589]: Session 19 logged out. Waiting for processes to exit. Dec 13 00:27:35.838248 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218). 
Dec 13 00:27:35.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.117:22-10.0.0.1:60218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:35.839031 systemd-logind[1589]: Removed session 19. Dec 13 00:27:35.979000 audit[5099]: USER_ACCT pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.980869 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:35.980000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.980000 audit[5099]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2414f180 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:35.980000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:35.983673 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:35.990358 systemd-logind[1589]: New session 20 of user core. Dec 13 00:27:35.994569 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 00:27:35.996000 audit[5099]: USER_START pid=5099 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:35.998000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:36.521597 containerd[1623]: time="2025-12-13T00:27:36.521539549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:27:37.266190 containerd[1623]: time="2025-12-13T00:27:37.266125861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:37.345396 containerd[1623]: time="2025-12-13T00:27:37.345279891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:27:37.345538 containerd[1623]: time="2025-12-13T00:27:37.345304317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:37.345820 kubelet[2802]: E1213 00:27:37.345684 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:37.345820 kubelet[2802]: E1213 00:27:37.345749 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:37.346461 kubelet[2802]: E1213 00:27:37.345907 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv75f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmfwt_calico-system(2103450f-178f-4fe4-be81-451ab8c0d111): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:37.347457 kubelet[2802]: E1213 00:27:37.347428 2802 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:27:37.469329 sshd[5104]: Connection closed by 10.0.0.1 port 60218 Dec 13 00:27:37.469681 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:37.469000 audit[5099]: USER_END pid=5099 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:37.470000 audit[5099]: CRED_DISP pid=5099 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:37.479930 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:60218.service: Deactivated successfully. Dec 13 00:27:37.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.117:22-10.0.0.1:60218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:37.482872 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 00:27:37.483828 systemd-logind[1589]: Session 20 logged out. Waiting for processes to exit. Dec 13 00:27:37.487257 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:60224.service - OpenSSH per-connection server daemon (10.0.0.1:60224). Dec 13 00:27:37.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.117:22-10.0.0.1:60224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:37.488033 systemd-logind[1589]: Removed session 20. 
Dec 13 00:27:37.520122 containerd[1623]: time="2025-12-13T00:27:37.519969438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:27:37.553000 audit[5116]: USER_ACCT pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:37.554776 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 60224 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:37.554000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:37.554000 audit[5116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc45e60f60 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:37.554000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:37.557505 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:37.563409 systemd-logind[1589]: New session 21 of user core. Dec 13 00:27:37.571625 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 00:27:37.572000 audit[5116]: USER_START pid=5116 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:37.575000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:37.994077 containerd[1623]: time="2025-12-13T00:27:37.993989742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:38.026770 containerd[1623]: time="2025-12-13T00:27:38.026672001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:27:38.026919 containerd[1623]: time="2025-12-13T00:27:38.026757413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:38.027040 kubelet[2802]: E1213 00:27:38.026953 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:38.027040 kubelet[2802]: E1213 00:27:38.027020 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:38.027337 kubelet[2802]: E1213 00:27:38.027267 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs77h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:38.029179 containerd[1623]: time="2025-12-13T00:27:38.029142930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:27:38.346912 containerd[1623]: time="2025-12-13T00:27:38.346835680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:38.516785 containerd[1623]: time="2025-12-13T00:27:38.516260886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:27:38.516965 containerd[1623]: time="2025-12-13T00:27:38.516308987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:38.517108 kubelet[2802]: E1213 00:27:38.517056 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:38.517707 kubelet[2802]: E1213 00:27:38.517120 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:38.517707 kubelet[2802]: E1213 00:27:38.517272 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs77h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hxl5n_calico-system(0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:38.518371 kubelet[2802]: E1213 00:27:38.518334 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:38.518371 kubelet[2802]: E1213 00:27:38.518450 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:27:39.301000 audit[5150]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:39.301000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd7b4577d0 a2=0 a3=7ffd7b4577bc items=0 ppid=2915 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:39.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:39.313000 audit[5150]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:39.313000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd7b4577d0 a2=0 a3=0 items=0 ppid=2915 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:39.313000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:39.325201 sshd[5120]: Connection closed by 10.0.0.1 port 60224 Dec 13 00:27:39.325882 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:39.327000 audit[5116]: USER_END pid=5116 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.328000 audit[5116]: CRED_DISP pid=5116 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.337000 audit[5153]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:39.337000 audit[5153]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd603a5b90 a2=0 a3=7ffd603a5b7c items=0 ppid=2915 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:39.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:39.341529 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:60224.service: Deactivated successfully. Dec 13 00:27:39.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.117:22-10.0.0.1:60224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:39.344652 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 00:27:39.343000 audit[5153]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:39.343000 audit[5153]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd603a5b90 a2=0 a3=0 items=0 ppid=2915 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:39.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:39.346738 systemd-logind[1589]: Session 21 logged out. Waiting for processes to exit. Dec 13 00:27:39.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.117:22-10.0.0.1:60230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:39.351722 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:60230.service - OpenSSH per-connection server daemon (10.0.0.1:60230). Dec 13 00:27:39.353929 systemd-logind[1589]: Removed session 21. Dec 13 00:27:39.448000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.450657 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 60230 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:39.451000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.451000 audit[5157]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f0cbfe0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:39.451000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:39.453700 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:39.459238 systemd-logind[1589]: New session 22 of user core. Dec 13 00:27:39.468675 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 00:27:39.470000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.473000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.918517 sshd[5161]: Connection closed by 10.0.0.1 port 60230 Dec 13 00:27:39.918763 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:39.920000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.920000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:39.930988 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:60230.service: Deactivated successfully. Dec 13 00:27:39.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.117:22-10.0.0.1:60230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:39.933263 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 00:27:39.936540 systemd-logind[1589]: Session 22 logged out. Waiting for processes to exit. Dec 13 00:27:39.937724 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:60246.service - OpenSSH per-connection server daemon (10.0.0.1:60246). Dec 13 00:27:39.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.117:22-10.0.0.1:60246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:39.939457 systemd-logind[1589]: Removed session 22. 
Dec 13 00:27:40.001000 audit[5172]: USER_ACCT pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:40.002654 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 60246 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:40.002000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:40.002000 audit[5172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa819fa70 a2=3 a3=0 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:40.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:40.010244 systemd-logind[1589]: New session 23 of user core. Dec 13 00:27:40.005299 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:40.022657 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 00:27:40.039000 audit[5172]: USER_START pid=5172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:40.041000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:40.196545 sshd[5176]: Connection closed by 10.0.0.1 port 60246 Dec 13 00:27:40.196759 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:40.196000 audit[5172]: USER_END pid=5172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:40.197000 audit[5172]: CRED_DISP pid=5172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:40.201254 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:60246.service: Deactivated successfully. Dec 13 00:27:40.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.117:22-10.0.0.1:60246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:40.203580 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 00:27:40.205495 systemd-logind[1589]: Session 23 logged out. Waiting for processes to exit. Dec 13 00:27:40.206843 systemd-logind[1589]: Removed session 23. 
Dec 13 00:27:40.520849 containerd[1623]: time="2025-12-13T00:27:40.520429806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:27:40.893840 containerd[1623]: time="2025-12-13T00:27:40.893795215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:40.924267 containerd[1623]: time="2025-12-13T00:27:40.924173608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:27:40.924452 containerd[1623]: time="2025-12-13T00:27:40.924279408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:40.924587 kubelet[2802]: E1213 00:27:40.924529 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:40.925085 kubelet[2802]: E1213 00:27:40.924605 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:40.925085 kubelet[2802]: E1213 00:27:40.924796 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twjcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-679b4bb9cf-2wh8x_calico-system(1cc720b4-f00a-45c7-ad27-5d1040c88fe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:40.926086 kubelet[2802]: E1213 00:27:40.926034 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:27:42.520572 containerd[1623]: time="2025-12-13T00:27:42.520429578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:27:42.882850 containerd[1623]: time="2025-12-13T00:27:42.882760141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:42.885097 containerd[1623]: time="2025-12-13T00:27:42.885041156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:27:42.885155 containerd[1623]: time="2025-12-13T00:27:42.885102642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:42.885363 kubelet[2802]: E1213 00:27:42.885306 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:27:42.885779 kubelet[2802]: E1213 00:27:42.885368 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:27:42.885779 kubelet[2802]: E1213 00:27:42.885510 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9c52d5ff733e47bb827dcc82f1f2eefb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdkdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-754d866664-bf6vf_calico-system(331b009d-3266-4003-85e6-8aaabc469f22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:42.887515 containerd[1623]: time="2025-12-13T00:27:42.887447268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:27:43.183722 containerd[1623]: time="2025-12-13T00:27:43.183564105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:43.188756 containerd[1623]: time="2025-12-13T00:27:43.188694981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:27:43.188842 containerd[1623]: time="2025-12-13T00:27:43.188763761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:43.189195 kubelet[2802]: E1213 00:27:43.189059 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:27:43.189195 kubelet[2802]: E1213 00:27:43.189167 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:27:43.189468 kubelet[2802]: E1213 00:27:43.189364 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdkdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-754d866664-bf6vf_calico-system(331b009d-3266-4003-85e6-8aaabc469f22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:43.190713 kubelet[2802]: E1213 00:27:43.190667 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-754d866664-bf6vf" podUID="331b009d-3266-4003-85e6-8aaabc469f22" Dec 13 00:27:43.519573 kubelet[2802]: E1213 00:27:43.519414 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0" Dec 13 00:27:43.520344 containerd[1623]: 
time="2025-12-13T00:27:43.520042257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:43.853848 containerd[1623]: time="2025-12-13T00:27:43.853784784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:43.944749 containerd[1623]: time="2025-12-13T00:27:43.944658000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:43.944908 containerd[1623]: time="2025-12-13T00:27:43.944789660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:43.945110 kubelet[2802]: E1213 00:27:43.945035 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:43.945521 kubelet[2802]: E1213 00:27:43.945114 2802 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:43.945521 kubelet[2802]: E1213 00:27:43.945302 2802 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7544896fd5-8pbbj_calico-apiserver(a727a70e-6988-4a99-89fe-438d343667ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:43.947004 kubelet[2802]: E1213 00:27:43.946922 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:27:45.210782 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:48344.service - OpenSSH per-connection server daemon (10.0.0.1:48344). Dec 13 00:27:45.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.117:22-10.0.0.1:48344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:45.223422 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 00:27:45.223570 kernel: audit: type=1130 audit(1765585665.210:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.117:22-10.0.0.1:48344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:45.287000 audit[5190]: USER_ACCT pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.287999 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 48344 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:45.290536 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:45.288000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.296316 systemd-logind[1589]: New session 24 of user core. 
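The ErrImagePull records above all reduce to the same registry response: a 404 when resolving the requested tag. The same resolution step can be reproduced outside the kubelet against the registry's v2 API. Below is a minimal sketch in Python, assuming the ghcr.io/flatcar/calico/* repositories allow anonymous pulls through the standard token endpoint; the image name and tag come from the log, everything else (function name, accept headers) is illustrative rather than part of the logged system.

```python
import json
import urllib.error
import urllib.request

# Image coordinates taken from the failing pull in the log above.
REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/apiserver"
TAG = "v3.30.4"

def tag_exists(repository: str, tag: str) -> bool:
    """Return True if the tag resolves to a manifest, False on HTTP 404."""
    # Anonymous bearer token for a public repository (registry v2 token flow).
    token_url = (f"https://{REGISTRY}/token"
                 f"?service={REGISTRY}&scope=repository:{repository}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    manifest_url = f"https://{REGISTRY}/v2/{repository}/manifests/{tag}"
    request = urllib.request.Request(manifest_url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": ", ".join([
            "application/vnd.oci.image.index.v1+json",
            "application/vnd.docker.distribution.manifest.list.v2+json",
        ]),
    })
    try:
        with urllib.request.urlopen(request) as resp:
            print(f"{repository}:{tag} -> {resp.headers.get('Docker-Content-Digest')}")
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # matches "failed to resolve image ... not found" above
            return False
        raise

if __name__ == "__main__":
    print("found" if tag_exists(REPOSITORY, TAG) else "not found")
```

A 404 here corresponds to containerd's "fetch failed after status: 404 Not Found" lines; a 200 with a digest would indicate the tag exists and the failure lies elsewhere.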
Dec 13 00:27:45.298776 kernel: audit: type=1101 audit(1765585665.287:884): pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.298816 kernel: audit: type=1103 audit(1765585665.288:885): pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.298837 kernel: audit: type=1006 audit(1765585665.289:886): pid=5190 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 00:27:45.289000 audit[5190]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc299e8cb0 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:45.309005 kernel: audit: type=1300 audit(1765585665.289:886): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc299e8cb0 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:45.309059 kernel: audit: type=1327 audit(1765585665.289:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:45.289000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:45.312580 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 13 00:27:45.314000 audit[5190]: USER_START pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.316000 audit[5194]: CRED_ACQ pid=5194 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.326219 kernel: audit: type=1105 audit(1765585665.314:887): pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.326286 kernel: audit: type=1103 audit(1765585665.316:888): pid=5194 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.383492 sshd[5194]: Connection closed by 10.0.0.1 port 48344 Dec 13 00:27:45.383815 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:45.385000 audit[5190]: USER_END pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.388706 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:48344.service: Deactivated successfully. Dec 13 00:27:45.391007 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 00:27:45.392473 systemd-logind[1589]: Session 24 logged out. Waiting for processes to exit. Dec 13 00:27:45.394347 systemd-logind[1589]: Removed session 24. Dec 13 00:27:45.385000 audit[5190]: CRED_DISP pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.405011 kernel: audit: type=1106 audit(1765585665.385:889): pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.405065 kernel: audit: type=1104 audit(1765585665.385:890): pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:45.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.117:22-10.0.0.1:48344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:47.401000 audit[5207]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:47.401000 audit[5207]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe338aa690 a2=0 a3=7ffe338aa67c items=0 ppid=2915 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:47.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:47.412000 audit[5207]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:47.412000 audit[5207]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe338aa690 a2=0 a3=7ffe338aa67c items=0 ppid=2915 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:47.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:50.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.117:22-10.0.0.1:44924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:50.404932 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924). Dec 13 00:27:50.407418 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 00:27:50.407539 kernel: audit: type=1130 audit(1765585670.404:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.117:22-10.0.0.1:44924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:50.465000 audit[5209]: USER_ACCT pid=5209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.466225 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:50.468562 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:50.473159 systemd-logind[1589]: New session 25 of user core. 
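The PROCTITLE values in the audit records above are the process command lines, hex-encoded with NUL-separated arguments. A small decoding sketch (standard library only; the helper name is just for illustration), applied to the two values that appear in the log:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Audit PROCTITLE values are the argv, hex-encoded and NUL-separated."""
    return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

# Values copied from the records above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))  # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# ['sshd-session: core [priv]']
```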
Dec 13 00:27:50.466000 audit[5209]: CRED_ACQ pid=5209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.490088 kernel: audit: type=1101 audit(1765585670.465:895): pid=5209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.490192 kernel: audit: type=1103 audit(1765585670.466:896): pid=5209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.490210 kernel: audit: type=1006 audit(1765585670.467:897): pid=5209 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 00:27:50.493203 kernel: audit: type=1300 audit(1765585670.467:897): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc807d2b30 a2=3 a3=0 items=0 ppid=1 pid=5209 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:50.467000 audit[5209]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc807d2b30 a2=3 a3=0 items=0 ppid=1 pid=5209 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:50.499129 kernel: audit: type=1327 audit(1765585670.467:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:50.467000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:50.503752 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 13 00:27:50.506000 audit[5209]: USER_START pid=5209 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.509000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.522831 kernel: audit: type=1105 audit(1765585670.506:898): pid=5209 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.522959 kernel: audit: type=1103 audit(1765585670.509:899): pid=5213 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.587246 sshd[5213]: Connection closed by 10.0.0.1 port 44924 Dec 13 00:27:50.589598 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:50.592000 audit[5209]: USER_END pid=5209 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.597196 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:44924.service: Deactivated successfully. Dec 13 00:27:50.601412 kernel: audit: type=1106 audit(1765585670.592:900): pid=5209 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.601305 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 00:27:50.593000 audit[5209]: CRED_DISP pid=5209 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:50.603432 systemd-logind[1589]: Session 25 logged out. Waiting for processes to exit. Dec 13 00:27:50.605202 systemd-logind[1589]: Removed session 25. Dec 13 00:27:50.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.117:22-10.0.0.1:44924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:50.607436 kernel: audit: type=1104 audit(1765585670.593:901): pid=5209 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:51.518737 kubelet[2802]: E1213 00:27:51.518259 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:52.519232 kubelet[2802]: E1213 00:27:52.519159 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmfwt" podUID="2103450f-178f-4fe4-be81-451ab8c0d111" Dec 13 00:27:52.519906 kubelet[2802]: E1213 00:27:52.519686 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hxl5n" podUID="0f1e7d1a-39ba-40d7-9b6b-6f10c141ab77" Dec 13 00:27:53.519524 kubelet[2802]: E1213 00:27:53.519465 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-679b4bb9cf-2wh8x" podUID="1cc720b4-f00a-45c7-ad27-5d1040c88fe5" Dec 13 00:27:55.519894 kubelet[2802]: E1213 00:27:55.519776 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-8pbbj" podUID="a727a70e-6988-4a99-89fe-438d343667ab" Dec 13 00:27:55.520762 kubelet[2802]: E1213 00:27:55.520699 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-754d866664-bf6vf" podUID="331b009d-3266-4003-85e6-8aaabc469f22" Dec 13 00:27:55.601436 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:44936.service - OpenSSH per-connection server daemon (10.0.0.1:44936). Dec 13 00:27:55.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.117:22-10.0.0.1:44936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:55.603635 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:55.603735 kernel: audit: type=1130 audit(1765585675.601:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.117:22-10.0.0.1:44936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:55.673000 audit[5253]: USER_ACCT pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.674776 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 44936 ssh2: RSA SHA256:jpTbqtmFYp+EndkJd2f6JVorlhwThjwnhAV1OnPrON4 Dec 13 00:27:55.677298 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:55.675000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.683623 systemd-logind[1589]: New session 26 of user core. 
Dec 13 00:27:55.685076 kernel: audit: type=1101 audit(1765585675.673:904): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.685158 kernel: audit: type=1103 audit(1765585675.675:905): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.685193 kernel: audit: type=1006 audit(1765585675.675:906): pid=5253 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 00:27:55.688102 kernel: audit: type=1300 audit(1765585675.675:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef05c1900 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:55.675000 audit[5253]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef05c1900 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:55.675000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:55.695212 kernel: audit: type=1327 audit(1765585675.675:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:55.699660 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 00:27:55.702000 audit[5253]: USER_START pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.709000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.715759 kernel: audit: type=1105 audit(1765585675.702:907): pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.715883 kernel: audit: type=1103 audit(1765585675.709:908): pid=5257 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.878129 sshd[5257]: Connection closed by 10.0.0.1 port 44936 Dec 13 00:27:55.878472 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:55.879000 audit[5253]: USER_END pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.884693 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:44936.service: Deactivated successfully. Dec 13 00:27:55.879000 audit[5253]: CRED_DISP pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.891590 kernel: audit: type=1106 audit(1765585675.879:909): pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.891645 kernel: audit: type=1104 audit(1765585675.879:910): pid=5253 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:55.888589 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 00:27:55.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.117:22-10.0.0.1:44936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:55.889958 systemd-logind[1589]: Session 26 logged out. Waiting for processes to exit. Dec 13 00:27:55.891714 systemd-logind[1589]: Removed session 26. 
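Each SSH session above leaves a matched run of audit records (USER_ACCT, CRED_ACQ, USER_START, USER_END, CRED_DISP, SERVICE_START/SERVICE_STOP) that the kernel echoes with a numeric type and an audit(EPOCH:SERIAL) stamp. A rough sketch for pulling those stamps out of journal text on stdin so the open/close pairs can be lined up by serial; the regex only covers the kernel echo format shown here, and the output layout is arbitrary:

```python
import re
import sys
from datetime import datetime, timezone

# Matches the kernel echo format above, e.g.
#   kernel: audit: type=1106 audit(1765585675.879:909): pid=5253 ...
AUDIT_RE = re.compile(r"audit: type=(\d+) audit\((\d+\.\d+):(\d+)\)")

for line in sys.stdin:
    match = AUDIT_RE.search(line)
    if not match:
        continue
    rec_type, epoch, serial = match.groups()
    stamp = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(f"serial={serial} type={rec_type} time={stamp.isoformat()}")
```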
Dec 13 00:27:57.519619 kubelet[2802]: E1213 00:27:57.519571 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7544896fd5-wwf2v" podUID="baf0c6ad-ada1-4a24-b663-b32f96db48d0"
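By this point the node is cycling through ImagePullBackOff for the same set of calico images. A throwaway sketch that tallies the image references in the kubelet's "Error syncing pod" records from journal text on stdin; the regex is only meant to catch the escaped ghcr.io references as they appear in the payloads above, not arbitrary image names:

```python
import re
import sys
from collections import Counter

# Image references inside the kubelet err= payloads, e.g.
#   Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"
IMAGE_RE = re.compile(r'ghcr\.io/[\w./-]+:[\w.-]+')

counts = Counter()
for line in sys.stdin:
    if "Error syncing pod" not in line:
        continue
    counts.update(set(IMAGE_RE.findall(line)))  # dedupe repeats within one record

for image, count in counts.most_common():
    print(f"{count:4d}  {image}")
```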