Dec 12 18:23:36.155470 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025
Dec 12 18:23:36.155494 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 12 18:23:36.155502 kernel: BIOS-provided physical RAM map:
Dec 12 18:23:36.155509 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:23:36.155515 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:23:36.155521 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:23:36.155531 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:23:36.155538 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:23:36.155544 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:23:36.155550 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:23:36.155556 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:23:36.155563 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:23:36.155569 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:23:36.155575 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:23:36.155585 kernel: NX (Execute Disable) protection: active
Dec 12 18:23:36.155592 kernel: APIC: Static calls initialized
Dec 12 18:23:36.155598 kernel: SMBIOS 2.8 present.
Dec 12 18:23:36.155605 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:23:36.155612 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:23:36.155620 kernel: Hypervisor detected: KVM
Dec 12 18:23:36.155627 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:23:36.155633 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:23:36.155640 kernel: kvm-clock: using sched offset of 6344782234 cycles
Dec 12 18:23:36.155647 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:23:36.155654 kernel: tsc: Detected 1999.999 MHz processor
Dec 12 18:23:36.155661 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:23:36.155669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:23:36.155678 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:23:36.155685 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:23:36.155692 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:23:36.155699 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:23:36.155706 kernel: Using GB pages for direct mapping
Dec 12 18:23:36.155713 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:23:36.155719 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:23:36.155726 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155736 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155743 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155750 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:23:36.155757 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155764 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155774 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155783 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:23:36.155791 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:23:36.155798 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:23:36.155805 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:23:36.155812 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:23:36.155821 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:23:36.155829 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:23:36.155836 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:23:36.155843 kernel: No NUMA configuration found
Dec 12 18:23:36.155850 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:23:36.155857 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Dec 12 18:23:36.155864 kernel: Zone ranges:
Dec 12 18:23:36.155885 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:23:36.155894 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:23:36.155902 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:23:36.155909 kernel: Device empty
Dec 12 18:23:36.155916 kernel: Movable zone start for each node
Dec 12 18:23:36.155923 kernel: Early memory node ranges
Dec 12 18:23:36.155930 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:23:36.155937 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:23:36.155946 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:23:36.155954 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:23:36.155961 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:23:36.155968 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:23:36.155975 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:23:36.155983 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:23:36.155990 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:23:36.155997 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:23:36.156006 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:23:36.156014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:23:36.156021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:23:36.156028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:23:36.156035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:23:36.156042 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:23:36.156050 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:23:36.156059 kernel: TSC deadline timer available
Dec 12 18:23:36.156066 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:23:36.156073 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:23:36.156080 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:23:36.156087 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:23:36.156094 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:23:36.156101 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:23:36.156108 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:23:36.156117 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:23:36.156124 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:23:36.156131 kernel: kvm-guest: setup PV sched yield
Dec 12 18:23:36.156139 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:23:36.156146 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:23:36.156153 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:23:36.156161 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:23:36.156170 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:23:36.156177 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:23:36.156184 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:23:36.156191 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:23:36.156198 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:23:36.156207 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 12 18:23:36.156214 kernel: random: crng init done
Dec 12 18:23:36.156224 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:23:36.156231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:23:36.156238 kernel: Fallback order for Node 0: 0
Dec 12 18:23:36.156246 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:23:36.156253 kernel: Policy zone: Normal
Dec 12 18:23:36.156260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:23:36.156267 kernel: software IO TLB: area num 2.
Dec 12 18:23:36.156276 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:23:36.156283 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:23:36.156291 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:23:36.156298 kernel: Dynamic Preempt: voluntary
Dec 12 18:23:36.156305 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:23:36.156313 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:23:36.156320 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:23:36.156329 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:23:36.156337 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:23:36.156344 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:23:36.156351 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:23:36.156358 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:23:36.156365 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:23:36.156382 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:23:36.156389 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:23:36.156397 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:23:36.156404 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:23:36.156414 kernel: Console: colour VGA+ 80x25
Dec 12 18:23:36.156421 kernel: printk: legacy console [tty0] enabled
Dec 12 18:23:36.156429 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:23:36.156436 kernel: ACPI: Core revision 20240827
Dec 12 18:23:36.156446 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:23:36.156454 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:23:36.156461 kernel: x2apic enabled
Dec 12 18:23:36.156469 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:23:36.156477 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:23:36.156484 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:23:36.156492 kernel: kvm-guest: setup PV IPIs
Dec 12 18:23:36.156501 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:23:36.156509 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Dec 12 18:23:36.156516 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Dec 12 18:23:36.156524 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:23:36.156531 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:23:36.156539 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:23:36.156547 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:23:36.156556 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:23:36.156564 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:23:36.156571 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:23:36.156579 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:23:36.156586 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:23:36.156594 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:23:36.156602 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:23:36.156612 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:23:36.156619 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:23:36.156627 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:23:36.156634 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:23:36.156642 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:23:36.156649 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:23:36.156657 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:23:36.156666 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:23:36.156674 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:23:36.156681 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:23:36.156689 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:23:36.156697 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:23:36.156704 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:23:36.156712 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:23:36.156754 kernel: landlock: Up and running.
Dec 12 18:23:36.156762 kernel: SELinux: Initializing.
Dec 12 18:23:36.156770 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:23:36.157163 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:23:36.157183 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:23:36.157190 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:23:36.157198 kernel: ... version: 0
Dec 12 18:23:36.157209 kernel: ... bit width: 48
Dec 12 18:23:36.157217 kernel: ... generic registers: 6
Dec 12 18:23:36.157225 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:23:36.157232 kernel: ... max period: 00007fffffffffff
Dec 12 18:23:36.157240 kernel: ... fixed-purpose events: 0
Dec 12 18:23:36.157247 kernel: ... event mask: 000000000000003f
Dec 12 18:23:36.157255 kernel: signal: max sigframe size: 3376
Dec 12 18:23:36.157264 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:23:36.157272 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:23:36.157280 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:23:36.157288 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:23:36.157295 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:23:36.157303 kernel: .... node #0, CPUs: #1
Dec 12 18:23:36.157311 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:23:36.157320 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Dec 12 18:23:36.157328 kernel: Memory: 3980240K/4193772K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 208856K reserved, 0K cma-reserved)
Dec 12 18:23:36.157336 kernel: devtmpfs: initialized
Dec 12 18:23:36.157344 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:23:36.157352 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:23:36.157359 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:23:36.157367 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:23:36.157376 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:23:36.157384 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:23:36.157392 kernel: audit: type=2000 audit(1765563813.158:1): state=initialized audit_enabled=0 res=1
Dec 12 18:23:36.157399 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:23:36.157407 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:23:36.157414 kernel: cpuidle: using governor menu
Dec 12 18:23:36.157796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:23:36.157805 kernel: dca service started, version 1.12.1
Dec 12 18:23:36.157816 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:23:36.157824 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:23:36.157832 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:23:36.157840 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:23:36.157847 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:23:36.157855 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:23:36.157862 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:23:36.157898 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:23:36.157906 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:23:36.157913 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:23:36.157921 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:23:36.157928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:23:36.157936 kernel: ACPI: Interpreter enabled
Dec 12 18:23:36.157943 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:23:36.157954 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:23:36.157961 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:23:36.157969 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:23:36.157977 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:23:36.157984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:23:36.158234 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:23:36.158428 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:23:36.158609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:23:36.158620 kernel: PCI host bridge to bus 0000:00
Dec 12 18:23:36.158799 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:23:36.159003 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:23:36.159170 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:23:36.159337 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:23:36.159498 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:23:36.159659 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:23:36.159819 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:23:36.160086 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:23:36.160527 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:23:36.160712 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:23:36.161511 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:23:36.161706 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:23:36.161945 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:23:36.162140 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:23:36.162323 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:23:36.162498 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:23:36.162716 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:23:36.163027 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:23:36.163307 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:23:36.163764 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:23:36.165087 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:23:36.165391 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:23:36.165668 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:23:36.165952 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:23:36.166229 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:23:36.166499 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:23:36.168839 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:23:36.169150 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:23:36.169414 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:23:36.169434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:23:36.169448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:23:36.169468 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:23:36.169481 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:23:36.169494 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:23:36.169508 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:23:36.169521 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:23:36.169534 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:23:36.169547 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:23:36.169564 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:23:36.169577 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:23:36.169590 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:23:36.169604 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:23:36.169616 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:23:36.169630 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:23:36.169643 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:23:36.169660 kernel: iommu: Default domain type: Translated
Dec 12 18:23:36.169673 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:23:36.169685 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:23:36.169698 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:23:36.169711 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:23:36.169724 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:23:36.170001 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:23:36.170286 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:23:36.170551 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:23:36.170572 kernel: vgaarb: loaded
Dec 12 18:23:36.170588 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:23:36.170603 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:23:36.170617 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:23:36.170631 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:23:36.170653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:23:36.170667 kernel: pnp: PnP ACPI init
Dec 12 18:23:36.177187 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:23:36.177218 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:23:36.177232 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:23:36.177246 kernel: NET: Registered PF_INET protocol family
Dec 12 18:23:36.177268 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:23:36.177282 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:23:36.177295 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:23:36.177309 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:23:36.177323 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:23:36.177336 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:23:36.177349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:23:36.177367 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:23:36.177380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:23:36.177394 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:23:36.177655 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:23:36.177925 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:23:36.178169 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:23:36.178409 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:23:36.178645 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:23:36.180917 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:23:36.180941 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:23:36.180955 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:23:36.180968 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:23:36.180981 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Dec 12 18:23:36.180994 kernel: Initialise system trusted keyrings
Dec 12 18:23:36.181012 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:23:36.181025 kernel: Key type asymmetric registered
Dec 12 18:23:36.181038 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:23:36.181050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:23:36.181063 kernel: io scheduler mq-deadline registered
Dec 12 18:23:36.181076 kernel: io scheduler kyber registered
Dec 12 18:23:36.181089 kernel: io scheduler bfq registered
Dec 12 18:23:36.181105 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:23:36.181119 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:23:36.181133 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:23:36.181146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:23:36.181159 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:23:36.181172 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:23:36.181184 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:23:36.181200 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:23:36.181474 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:23:36.181492 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:23:36.181731 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:23:36.181990 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:23:34 UTC (1765563814)
Dec 12 18:23:36.182236 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:23:36.182258 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:23:36.182272 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:23:36.182285 kernel: Segment Routing with IPv6
Dec 12 18:23:36.182298 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:23:36.182311 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:23:36.182324 kernel: Key type dns_resolver registered
Dec 12 18:23:36.182336 kernel: IPI shorthand broadcast: enabled
Dec 12 18:23:36.182349 kernel: sched_clock: Marking stable (1883004323, 359518575)->(2337947688, -95424790)
Dec 12 18:23:36.182365 kernel: registered taskstats version 1
Dec 12 18:23:36.182378 kernel: Loading compiled-in X.509 certificates
Dec 12 18:23:36.182391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4'
Dec 12 18:23:36.182404 kernel: Demotion targets for Node 0: null
Dec 12 18:23:36.182417 kernel: Key type .fscrypt registered
Dec 12 18:23:36.182430 kernel: Key type fscrypt-provisioning registered
Dec 12 18:23:36.182442 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:23:36.182458 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:23:36.182472 kernel: ima: No architecture policies found
Dec 12 18:23:36.182484 kernel: clk: Disabling unused clocks
Dec 12 18:23:36.182497 kernel: Freeing unused kernel image (initmem) memory: 15464K
Dec 12 18:23:36.182509 kernel: Write protecting the kernel read-only data: 45056k
Dec 12 18:23:36.182522 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Dec 12 18:23:36.182535 kernel: Run /init as init process
Dec 12 18:23:36.182551 kernel: with arguments:
Dec 12 18:23:36.182564 kernel: /init
Dec 12 18:23:36.182576 kernel: with environment:
Dec 12 18:23:36.182589 kernel: HOME=/
Dec 12 18:23:36.182625 kernel: TERM=linux
Dec 12 18:23:36.182641 kernel: SCSI subsystem initialized
Dec 12 18:23:36.182654 kernel: libata version 3.00 loaded.
Dec 12 18:23:36.184274 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:23:36.184299 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:23:36.184553 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:23:36.184802 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:23:36.185266 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:23:36.185553 kernel: scsi host0: ahci
Dec 12 18:23:36.185832 kernel: scsi host1: ahci
Dec 12 18:23:36.186611 kernel: scsi host2: ahci
Dec 12 18:23:36.188234 kernel: scsi host3: ahci
Dec 12 18:23:36.188515 kernel: scsi host4: ahci
Dec 12 18:23:36.188784 kernel: scsi host5: ahci
Dec 12 18:23:36.188809 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Dec 12 18:23:36.188823 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Dec 12 18:23:36.188838 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Dec 12 18:23:36.188852 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Dec 12 18:23:36.188865 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Dec 12 18:23:36.189902 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Dec 12 18:23:36.189918 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:23:36.189938 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:23:36.189951 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 18:23:36.189964 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:23:36.189978 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:23:36.189991 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:23:36.190302 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 12 18:23:36.190578 kernel: scsi host6: Virtio SCSI HBA
Dec 12 18:23:36.190869 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 12 18:23:36.191171 kernel: sd 6:0:0:0: Power-on or device reset occurred
Dec 12 18:23:36.191444 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 12 18:23:36.191714 kernel: sd 6:0:0:0: [sda] Write Protect is off
Dec 12 18:23:36.193836 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 12 18:23:36.194093 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:23:36.194108 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:23:36.194118 kernel: GPT:25804799 != 167739391
Dec 12 18:23:36.194130 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:23:36.194138 kernel: GPT:25804799 != 167739391
Dec 12 18:23:36.194147 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:23:36.194155 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:23:36.194355 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Dec 12 18:23:36.194367 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:23:36.194376 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:23:36.194384 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:23:36.194392 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 12 18:23:36.194401 kernel: raid6: avx2x4 gen() 24679 MB/s
Dec 12 18:23:36.194412 kernel: raid6: avx2x2 gen() 23087 MB/s
Dec 12 18:23:36.194422 kernel: raid6: avx2x1 gen() 14097 MB/s
Dec 12 18:23:36.194430 kernel: raid6: using algorithm avx2x4 gen() 24679 MB/s
Dec 12 18:23:36.194438 kernel: raid6: .... xor() 3183 MB/s, rmw enabled
Dec 12 18:23:36.194446 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:23:36.194457 kernel: xor: automatically using best checksumming function avx
Dec 12 18:23:36.194466 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:23:36.194474 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (166)
Dec 12 18:23:36.194483 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f
Dec 12 18:23:36.194491 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:23:36.194499 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:23:36.194507 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:23:36.194518 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:23:36.194526 kernel: loop: module loaded
Dec 12 18:23:36.194534 kernel: loop0: detected capacity change from 0 to 100136
Dec 12 18:23:36.194543 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:23:36.194552 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:23:36.194564 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:23:36.194576 systemd[1]: Detected virtualization kvm.
Dec 12 18:23:36.194585 systemd[1]: Detected architecture x86-64.
Dec 12 18:23:36.194593 systemd[1]: Running in initrd.
Dec 12 18:23:36.194602 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:23:36.194611 systemd[1]: Hostname set to .
Dec 12 18:23:36.194620 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:23:36.194631 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:23:36.194640 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:23:36.194648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:23:36.194657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:23:36.194667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:23:36.194676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:23:36.194688 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:23:36.194697 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:23:36.194705 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:23:36.194996 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:23:36.195008 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:23:36.195017 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:23:36.199327 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:23:36.199345 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:23:36.199355 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:23:36.199364 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:23:36.199373 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:23:36.199383 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 18:23:36.199392 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:23:36.199405 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:23:36.199414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:23:36.199423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:23:36.199433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:23:36.199441 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:23:36.199450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:23:36.199461 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:23:36.199472 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:23:36.199481 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:23:36.199491 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:23:36.199499 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:23:36.199508 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:23:36.199517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:23:36.199528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:23:36.199537 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:23:36.199546 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:23:36.199588 systemd-journald[304]: Collecting audit messages is enabled.
Dec 12 18:23:36.199613 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:23:36.199624 kernel: audit: type=1130 audit(1765563816.159:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.199633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:23:36.199645 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:23:36.199653 kernel: Bridge firewalling registered
Dec 12 18:23:36.199663 systemd-journald[304]: Journal started
Dec 12 18:23:36.199681 systemd-journald[304]: Runtime Journal (/run/log/journal/d4ed3765f28f48678ca67f8984a06b66) is 8M, max 78.1M, 70.1M free.
Dec 12 18:23:36.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.203894 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:23:36.203154 systemd-modules-load[306]: Inserted module 'br_netfilter'
Dec 12 18:23:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.213903 kernel: audit: type=1130 audit(1765563816.204:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.217055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:23:36.314558 kernel: audit: type=1130 audit(1765563816.216:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.229016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:23:36.321835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:23:36.326970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:23:36.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.337380 kernel: audit: type=1130 audit(1765563816.326:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.337088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:23:36.347602 kernel: audit: type=1130 audit(1765563816.337:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.338829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:23:36.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.343242 systemd-tmpfiles[323]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:23:36.356828 kernel: audit: type=1130 audit(1765563816.347:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.351150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:23:36.363000 audit: BPF prog-id=6 op=LOAD
Dec 12 18:23:36.365237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:23:36.369433 kernel: audit: type=1334 audit(1765563816.363:8): prog-id=6 op=LOAD
Dec 12 18:23:36.370003 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:23:36.376973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:23:36.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.388197 kernel: audit: type=1130 audit(1765563816.379:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.396413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:23:36.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.398965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:23:36.406370 kernel: audit: type=1130 audit(1765563816.397:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.410145 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:23:36.442063 dracut-cmdline[346]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 12 18:23:36.451486 systemd-resolved[330]: Positive Trust Anchors:
Dec 12 18:23:36.452533 systemd-resolved[330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:23:36.452539 systemd-resolved[330]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 12 18:23:36.452567 systemd-resolved[330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:23:36.480318 systemd-resolved[330]: Defaulting to hostname 'linux'.
Dec 12 18:23:36.482419 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:23:36.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.484188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:23:36.546904 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:23:36.560899 kernel: iscsi: registered transport (tcp)
Dec 12 18:23:36.582944 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:23:36.582982 kernel: QLogic iSCSI HBA Driver
Dec 12 18:23:36.610703 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:23:36.647231 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:23:36.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.650636 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:23:36.698010 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:23:36.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.700944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:23:36.703258 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:23:36.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.735649 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:23:36.737000 audit: BPF prog-id=7 op=LOAD
Dec 12 18:23:36.737000 audit: BPF prog-id=8 op=LOAD
Dec 12 18:23:36.739027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:23:36.769202 systemd-udevd[585]: Using default interface naming scheme 'v257'.
Dec 12 18:23:36.782618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:23:36.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.787005 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:23:36.817108 dracut-pre-trigger[654]: rd.md=0: removing MD RAID activation
Dec 12 18:23:36.817575 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:23:36.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.820000 audit: BPF prog-id=9 op=LOAD
Dec 12 18:23:36.822610 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:23:36.853219 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:23:36.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.857009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:23:36.874126 systemd-networkd[696]: lo: Link UP
Dec 12 18:23:36.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.874972 systemd-networkd[696]: lo: Gained carrier
Dec 12 18:23:36.875749 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:23:36.876536 systemd[1]: Reached target network.target - Network.
Dec 12 18:23:36.958354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:23:36.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:36.963024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:23:37.111382 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 12 18:23:37.118028 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:23:37.130942 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 12 18:23:37.141221 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 12 18:23:37.279465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:23:37.285347 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:23:37.288216 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:23:37.301964 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:23:37.339985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:23:37.341083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:23:37.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:37.342856 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:23:37.349382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:23:37.354216 disk-uuid[823]: Primary Header is updated.
Dec 12 18:23:37.354216 disk-uuid[823]: Secondary Entries is updated.
Dec 12 18:23:37.354216 disk-uuid[823]: Secondary Header is updated.
Dec 12 18:23:37.361259 systemd-networkd[696]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:23:37.361274 systemd-networkd[696]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:23:37.364161 systemd-networkd[696]: eth0: Link UP
Dec 12 18:23:37.365500 systemd-networkd[696]: eth0: Gained carrier
Dec 12 18:23:37.365513 systemd-networkd[696]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:23:37.471805 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:23:37.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:37.528772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:23:37.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:37.532226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:23:37.533289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:23:37.535427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:23:37.540006 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:23:37.572168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:23:37.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.252035 systemd-networkd[696]: eth0: DHCPv4 address 172.234.207.166/24, gateway 172.234.207.1 acquired from 23.205.167.214
Dec 12 18:23:38.402213 disk-uuid[824]: Warning: The kernel is still using the old partition table.
Dec 12 18:23:38.402213 disk-uuid[824]: The new table will be used at the next reboot or after you
Dec 12 18:23:38.402213 disk-uuid[824]: run partprobe(8) or kpartx(8)
Dec 12 18:23:38.402213 disk-uuid[824]: The operation has completed successfully.
Dec 12 18:23:38.412202 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:23:38.412345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:23:38.433431 kernel: kauditd_printk_skb: 17 callbacks suppressed
Dec 12 18:23:38.433460 kernel: audit: type=1130 audit(1765563818.412:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.433476 kernel: audit: type=1131 audit(1765563818.412:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.415102 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:23:38.468468 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (852)
Dec 12 18:23:38.468538 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:23:38.471955 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:23:38.481513 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:23:38.481535 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:23:38.481556 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:23:38.493897 kernel: BTRFS info (device sda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:23:38.494910 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:23:38.503567 kernel: audit: type=1130 audit(1765563818.494:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.496776 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:23:38.630957 ignition[871]: Ignition 2.22.0
Dec 12 18:23:38.630974 ignition[871]: Stage: fetch-offline
Dec 12 18:23:38.631014 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:38.631026 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:38.643748 kernel: audit: type=1130 audit(1765563818.635:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.634704 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:23:38.631122 ignition[871]: parsed url from cmdline: ""
Dec 12 18:23:38.638038 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:23:38.631126 ignition[871]: no config URL provided
Dec 12 18:23:38.631132 ignition[871]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:23:38.631143 ignition[871]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:23:38.631148 ignition[871]: failed to fetch config: resource requires networking
Dec 12 18:23:38.631396 ignition[871]: Ignition finished successfully
Dec 12 18:23:38.690442 ignition[877]: Ignition 2.22.0
Dec 12 18:23:38.690460 ignition[877]: Stage: fetch
Dec 12 18:23:38.690617 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:38.690630 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:38.690736 ignition[877]: parsed url from cmdline: ""
Dec 12 18:23:38.690742 ignition[877]: no config URL provided
Dec 12 18:23:38.690748 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:23:38.690757 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:23:38.690806 ignition[877]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 12 18:23:38.737101 systemd-networkd[696]: eth0: Gained IPv6LL
Dec 12 18:23:38.771543 ignition[877]: PUT result: OK
Dec 12 18:23:38.771673 ignition[877]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 12 18:23:38.883474 ignition[877]: GET result: OK
Dec 12 18:23:38.884582 ignition[877]: parsing config with SHA512: 29b71b8e34af7bfb41104114e4b4ef80672072d4e92bd8f0f66713dc762a9e7930c289c6b2cb2cd6315027437743a468a8bc36abea7dced73da933df0a89db1e
Dec 12 18:23:38.893771 unknown[877]: fetched base config from "system"
Dec 12 18:23:38.893785 unknown[877]: fetched base config from "system"
Dec 12 18:23:38.894133 ignition[877]: fetch: fetch complete
Dec 12 18:23:38.893792 unknown[877]: fetched user config from "akamai"
Dec 12 18:23:38.894139 ignition[877]: fetch: fetch passed
Dec 12 18:23:38.897142 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:23:38.907177 kernel: audit: type=1130 audit(1765563818.897:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.894188 ignition[877]: Ignition finished successfully
Dec 12 18:23:38.901036 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:23:38.934263 ignition[884]: Ignition 2.22.0
Dec 12 18:23:38.934278 ignition[884]: Stage: kargs
Dec 12 18:23:38.934452 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:38.934744 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:38.935689 ignition[884]: kargs: kargs passed
Dec 12 18:23:38.935744 ignition[884]: Ignition finished successfully
Dec 12 18:23:38.940284 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:23:38.950225 kernel: audit: type=1130 audit(1765563818.940:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.944074 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:23:38.974854 ignition[890]: Ignition 2.22.0
Dec 12 18:23:38.975851 ignition[890]: Stage: disks
Dec 12 18:23:38.976035 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:38.976046 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:38.976835 ignition[890]: disks: disks passed
Dec 12 18:23:38.978542 ignition[890]: Ignition finished successfully
Dec 12 18:23:38.982664 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:23:38.991627 kernel: audit: type=1130 audit(1765563818.982:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:38.983716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:23:38.992420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:23:38.994100 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:23:38.995754 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:23:38.997211 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:23:38.999909 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:23:39.042534 systemd-fsck[898]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 12 18:23:39.045440 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:23:39.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.050976 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:23:39.057443 kernel: audit: type=1130 audit(1765563819.046:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.169907 kernel: EXT4-fs (sda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none.
Dec 12 18:23:39.170988 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:23:39.172302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:23:39.175037 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:23:39.179036 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:23:39.181152 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:23:39.181195 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:23:39.181219 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:23:39.192775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:23:39.196049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:23:39.202904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (906)
Dec 12 18:23:39.207903 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:23:39.207927 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:23:39.218957 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:23:39.219520 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:23:39.219540 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:23:39.223808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:23:39.270451 initrd-setup-root[930]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:23:39.275213 initrd-setup-root[937]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:23:39.280811 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:23:39.285258 initrd-setup-root[951]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:23:39.381181 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:23:39.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.384971 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:23:39.392161 kernel: audit: type=1130 audit(1765563819.381:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.400630 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:23:39.413904 kernel: BTRFS info (device sda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:23:39.448400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:23:39.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.457288 ignition[1019]: INFO : Ignition 2.22.0
Dec 12 18:23:39.457288 ignition[1019]: INFO : Stage: mount
Dec 12 18:23:39.457288 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:39.457288 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:39.457288 ignition[1019]: INFO : mount: mount passed
Dec 12 18:23:39.457288 ignition[1019]: INFO : Ignition finished successfully
Dec 12 18:23:39.463566 kernel: audit: type=1130 audit(1765563819.449:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:39.452318 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:23:39.458251 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:23:39.460618 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:23:39.478150 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:23:39.502904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1032)
Dec 12 18:23:39.507105 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:23:39.507130 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:23:39.514938 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:23:39.514966 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:23:39.517150 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:23:39.521866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:23:39.553129 ignition[1048]: INFO : Ignition 2.22.0
Dec 12 18:23:39.553129 ignition[1048]: INFO : Stage: files
Dec 12 18:23:39.555816 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:39.555816 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:39.555816 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:23:39.558907 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:23:39.558907 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:23:39.583191 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:23:39.583191 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:23:39.583191 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:23:39.583191 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:23:39.583191 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 12 18:23:39.565218 unknown[1048]: wrote ssh authorized keys file for user: core
Dec 12 18:23:39.879530 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:23:40.835906 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:23:40.846341 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 12 18:23:41.324569 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:23:41.594129 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 12 18:23:41.594129 ignition[1048]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:23:41.597998 ignition[1048]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:23:41.599342 ignition[1048]: INFO : files: files passed
Dec 12 18:23:41.599342 ignition[1048]: INFO : Ignition finished successfully
Dec 12 18:23:41.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.601496 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:23:41.608039 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:23:41.611178 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:23:41.620784 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:23:41.623200 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:23:41.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.635552 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:23:41.635552 initrd-setup-root-after-ignition[1081]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:23:41.639017 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:23:41.640386 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:23:41.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.641566 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:23:41.643757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:23:41.688330 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:23:41.688480 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:23:41.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.690551 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:23:41.691620 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:23:41.693463 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:23:41.694407 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:23:41.729552 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:23:41.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.731924 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:23:41.750436 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:23:41.750664 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:23:41.752421 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:23:41.754069 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:23:41.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.754809 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:23:41.755020 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:23:41.756806 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:23:41.757856 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:23:41.759237 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:23:41.760901 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:23:41.762346 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:23:41.763821 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:23:41.765593 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:23:41.767203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:23:41.768824 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:23:41.770445 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:23:41.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.772010 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:23:41.773540 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:23:41.773672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:23:41.775581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:23:41.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.776645 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:23:41.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.778014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:23:41.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.778137 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:23:41.800848 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:23:41.801161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:23:41.803028 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:23:41.803146 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:23:41.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.804198 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:23:41.804302 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:23:41.807940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:23:41.808817 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:23:41.809039 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:23:41.819060 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:23:41.820053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:23:41.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.821000 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:23:41.823117 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:23:41.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.823272 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:23:41.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.826002 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:23:41.826152 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:23:41.835981 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:23:41.837295 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:23:41.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.844922 ignition[1105]: INFO : Ignition 2.22.0
Dec 12 18:23:41.844922 ignition[1105]: INFO : Stage: umount
Dec 12 18:23:41.844922 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:23:41.844922 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:23:41.850782 ignition[1105]: INFO : umount: umount passed
Dec 12 18:23:41.850782 ignition[1105]: INFO : Ignition finished successfully
Dec 12 18:23:41.849365 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:23:41.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.851959 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:23:41.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.853340 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:23:41.853421 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:23:41.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.856953 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:23:41.857064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:23:41.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.857858 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:23:41.857931 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:23:41.859441 systemd[1]: Stopped target network.target - Network.
Dec 12 18:23:41.860138 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:23:41.860228 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:23:41.862520 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:23:41.863591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:23:41.864581 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:23:41.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.865958 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:23:41.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.867126 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:23:41.868613 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:23:41.868674 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:23:41.870304 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:23:41.870364 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:23:41.872053 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 12 18:23:41.872100 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 18:23:41.873499 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:23:41.873586 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:23:41.875195 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:23:41.875258 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:23:41.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.877327 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:23:41.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.878493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:23:41.885530 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:23:41.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.886374 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:23:41.887536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:23:41.889086 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:23:41.889200 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:23:41.895000 audit: BPF prog-id=6 op=UNLOAD
Dec 12 18:23:41.895000 audit: BPF prog-id=9 op=UNLOAD
Dec 12 18:23:41.891728 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:23:41.891840 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:23:41.897130 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:23:41.898611 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:23:41.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.898671 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:23:41.900301 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:23:41.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.900366 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:23:41.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.902940 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:23:41.906323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:23:41.906384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:23:41.907241 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:23:41.907311 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:23:41.908076 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:23:41.908127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:23:41.909569 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:23:41.923203 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:23:41.923394 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:23:41.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.926649 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:23:41.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.926700 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:23:41.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.931553 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:23:41.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.931593 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:23:41.932307 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:23:41.932366 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:23:41.933671 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:23:41.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.933721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:23:41.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.935302 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:23:41.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.935353 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:23:41.939004 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:23:41.939778 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:23:41.939837 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:23:41.943423 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:23:41.943480 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:23:41.944858 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:23:41.944930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:23:41.977592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:23:41.983184 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:23:41.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:41.985797 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:23:41.985958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:23:41.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:41.987805 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:23:41.989716 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:23:42.007971 systemd[1]: Switching root. Dec 12 18:23:42.045862 systemd-journald[304]: Journal stopped Dec 12 18:23:43.327410 systemd-journald[304]: Received SIGTERM from PID 1 (systemd). Dec 12 18:23:43.327439 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:23:43.327452 kernel: SELinux: policy capability open_perms=1 Dec 12 18:23:43.327463 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:23:43.327472 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:23:43.327484 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:23:43.327494 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:23:43.327504 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:23:43.327514 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:23:43.327523 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:23:43.327533 systemd[1]: Successfully loaded SELinux policy in 83.609ms. Dec 12 18:23:43.327547 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.228ms. Dec 12 18:23:43.327558 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:23:43.327569 systemd[1]: Detected virtualization kvm. Dec 12 18:23:43.327583 systemd[1]: Detected architecture x86-64. 
Dec 12 18:23:43.327596 systemd[1]: Detected first boot. Dec 12 18:23:43.327608 systemd[1]: Initializing machine ID from random generator. Dec 12 18:23:43.327618 kernel: Guest personality initialized and is inactive Dec 12 18:23:43.327628 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:23:43.327638 kernel: Initialized host personality Dec 12 18:23:43.327650 zram_generator::config[1152]: No configuration found. Dec 12 18:23:43.327662 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:23:43.327672 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:23:43.327683 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:23:43.327693 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:23:43.327704 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:23:43.327718 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:23:43.327731 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:23:43.327742 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:23:43.327753 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:23:43.327764 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:23:43.327775 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:23:43.327787 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:23:43.327798 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:23:43.327809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:23:43.327820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 12 18:23:43.327831 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:23:43.327842 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:23:43.327852 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:23:43.327866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:23:43.327920 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:23:43.327934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:23:43.327945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:23:43.327956 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:23:43.327967 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:23:43.327980 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:23:43.327992 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:23:43.328003 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:23:43.328013 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:23:43.328024 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 18:23:43.328035 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:23:43.328046 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:23:43.328059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:23:43.328070 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:23:43.328081 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Dec 12 18:23:43.328092 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:23:43.328106 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 18:23:43.328117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:23:43.328128 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 18:23:43.328139 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 18:23:43.328150 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:23:43.328161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:23:43.328174 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:23:43.328185 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:23:43.328195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:23:43.328206 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:23:43.328217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:23:43.328228 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:23:43.328240 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:23:43.328253 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:23:43.328264 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:23:43.328275 systemd[1]: Reached target machines.target - Containers. Dec 12 18:23:43.328286 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 12 18:23:43.328298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:23:43.328309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:23:43.328319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:23:43.328332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:23:43.328343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:23:43.328354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:23:43.328365 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:23:43.328376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:23:43.328387 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:23:43.328401 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:23:43.328412 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:23:43.328423 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:23:43.328433 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:23:43.328445 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:23:43.328456 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:23:43.328467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 12 18:23:43.328479 kernel: fuse: init (API version 7.41) Dec 12 18:23:43.328490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:23:43.328501 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:23:43.328512 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:23:43.328523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:23:43.328535 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:23:43.328548 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:23:43.328559 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:23:43.328570 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:23:43.328581 kernel: ACPI: bus type drm_connector registered Dec 12 18:23:43.328591 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:23:43.328602 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:23:43.328613 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:23:43.328646 systemd-journald[1233]: Collecting audit messages is enabled. Dec 12 18:23:43.328670 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:23:43.328682 systemd-journald[1233]: Journal started Dec 12 18:23:43.328704 systemd-journald[1233]: Runtime Journal (/run/log/journal/f4488b84561143bc9ed3977308cc4b3f) is 8M, max 78.1M, 70.1M free. 
Dec 12 18:23:43.007000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 18:23:43.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.211000 audit: BPF prog-id=14 op=UNLOAD Dec 12 18:23:43.211000 audit: BPF prog-id=13 op=UNLOAD Dec 12 18:23:43.212000 audit: BPF prog-id=15 op=LOAD Dec 12 18:23:43.212000 audit: BPF prog-id=16 op=LOAD Dec 12 18:23:43.212000 audit: BPF prog-id=17 op=LOAD Dec 12 18:23:43.322000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 18:23:43.322000 audit[1233]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcc3f17050 a2=4000 a3=0 items=0 ppid=1 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:43.322000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 18:23:42.869779 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:23:42.894259 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 18:23:42.894906 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 12 18:23:43.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.333918 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:23:43.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.335739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:23:43.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.336972 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:23:43.337242 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:23:43.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.338345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:23:43.338541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:23:43.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:43.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.340129 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:23:43.340418 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:23:43.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.341624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:23:43.341817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:23:43.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.343188 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:23:43.343376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:23:43.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:43.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.344712 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:23:43.345006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:23:43.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.346290 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:23:43.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.347535 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:23:43.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.350285 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:23:43.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:43.351553 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:23:43.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.368857 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:23:43.370799 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 18:23:43.374983 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:23:43.378981 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:23:43.381922 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:23:43.381954 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:23:43.383637 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:23:43.385925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:23:43.386112 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:23:43.391122 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:23:43.396008 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:23:43.396779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:23:43.397937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 12 18:23:43.399069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:23:43.401824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:23:43.407252 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:23:43.411254 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:23:43.415133 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:23:43.415996 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:23:43.417606 systemd-journald[1233]: Time spent on flushing to /var/log/journal/f4488b84561143bc9ed3977308cc4b3f is 54.109ms for 1122 entries. Dec 12 18:23:43.417606 systemd-journald[1233]: System Journal (/var/log/journal/f4488b84561143bc9ed3977308cc4b3f) is 8M, max 588.1M, 580.1M free. Dec 12 18:23:43.484331 systemd-journald[1233]: Received client request to flush runtime journal. Dec 12 18:23:43.484369 kernel: kauditd_printk_skb: 91 callbacks suppressed Dec 12 18:23:43.484386 kernel: audit: type=1130 audit(1765563823.461:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.484401 kernel: loop1: detected capacity change from 0 to 111544 Dec 12 18:23:43.484415 kernel: audit: type=1130 audit(1765563823.474:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:43.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.461240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:23:43.474577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:23:43.475589 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:23:43.500936 kernel: audit: type=1130 audit(1765563823.490:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.486033 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:23:43.489589 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:23:43.509010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:23:43.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.517710 kernel: audit: type=1130 audit(1765563823.509:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.532659 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 12 18:23:43.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.543740 kernel: audit: type=1130 audit(1765563823.533:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.543822 kernel: loop2: detected capacity change from 0 to 119256 Dec 12 18:23:43.540023 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 18:23:43.549755 kernel: audit: type=1334 audit(1765563823.536:132): prog-id=18 op=LOAD Dec 12 18:23:43.552444 kernel: audit: type=1334 audit(1765563823.536:133): prog-id=19 op=LOAD Dec 12 18:23:43.536000 audit: BPF prog-id=18 op=LOAD Dec 12 18:23:43.536000 audit: BPF prog-id=19 op=LOAD Dec 12 18:23:43.548996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:23:43.555213 kernel: audit: type=1334 audit(1765563823.536:134): prog-id=20 op=LOAD Dec 12 18:23:43.536000 audit: BPF prog-id=20 op=LOAD Dec 12 18:23:43.554009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:23:43.546000 audit: BPF prog-id=21 op=LOAD Dec 12 18:23:43.556888 kernel: audit: type=1334 audit(1765563823.546:135): prog-id=21 op=LOAD Dec 12 18:23:43.563517 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:23:43.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:43.572899 kernel: audit: type=1130 audit(1765563823.565:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.580000 audit: BPF prog-id=22 op=LOAD Dec 12 18:23:43.580000 audit: BPF prog-id=23 op=LOAD Dec 12 18:23:43.580000 audit: BPF prog-id=24 op=LOAD Dec 12 18:23:43.582487 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:23:43.586000 audit: BPF prog-id=25 op=LOAD Dec 12 18:23:43.586000 audit: BPF prog-id=26 op=LOAD Dec 12 18:23:43.586000 audit: BPF prog-id=27 op=LOAD Dec 12 18:23:43.588065 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 18:23:43.608768 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 12 18:23:43.609068 kernel: loop3: detected capacity change from 0 to 8 Dec 12 18:23:43.609129 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 12 18:23:43.617560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:23:43.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:43.629996 kernel: loop4: detected capacity change from 0 to 219144 Dec 12 18:23:43.674908 kernel: loop5: detected capacity change from 0 to 111544 Dec 12 18:23:43.678715 systemd-nsresourced[1298]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 18:23:43.689599 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 12 18:23:43.694892 kernel: loop6: detected capacity change from 0 to 119256
Dec 12 18:23:43.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:43.699210 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 12 18:23:43.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:43.718084 kernel: loop7: detected capacity change from 0 to 8
Dec 12 18:23:43.730068 kernel: loop1: detected capacity change from 0 to 219144
Dec 12 18:23:43.744509 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Dec 12 18:23:43.764924 (sd-merge)[1302]: Merged extensions into '/usr'.
Dec 12 18:23:43.780020 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:23:43.780036 systemd[1]: Reloading...
Dec 12 18:23:43.895705 systemd-oomd[1290]: No swap; memory pressure usage will be degraded
Dec 12 18:23:43.907884 systemd-resolved[1292]: Positive Trust Anchors:
Dec 12 18:23:43.908907 systemd-resolved[1292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:23:43.908994 systemd-resolved[1292]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 12 18:23:43.909065 systemd-resolved[1292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:23:43.920765 systemd-resolved[1292]: Defaulting to hostname 'linux'.
Dec 12 18:23:43.925905 zram_generator::config[1346]: No configuration found.
Dec 12 18:23:44.159034 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:23:44.159648 systemd[1]: Reloading finished in 379 ms.
Dec 12 18:23:44.192825 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 12 18:23:44.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.193917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:23:44.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.217236 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:23:44.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.218346 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:23:44.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.223134 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:23:44.228209 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:23:44.232038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:23:44.231000 audit: BPF prog-id=8 op=UNLOAD
Dec 12 18:23:44.231000 audit: BPF prog-id=7 op=UNLOAD
Dec 12 18:23:44.232000 audit: BPF prog-id=28 op=LOAD
Dec 12 18:23:44.232000 audit: BPF prog-id=29 op=LOAD
Dec 12 18:23:44.235152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:23:44.242000 audit: BPF prog-id=30 op=LOAD
Dec 12 18:23:44.242000 audit: BPF prog-id=22 op=UNLOAD
Dec 12 18:23:44.242000 audit: BPF prog-id=31 op=LOAD
Dec 12 18:23:44.242000 audit: BPF prog-id=32 op=LOAD
Dec 12 18:23:44.242000 audit: BPF prog-id=23 op=UNLOAD
Dec 12 18:23:44.242000 audit: BPF prog-id=24 op=UNLOAD
Dec 12 18:23:44.243000 audit: BPF prog-id=33 op=LOAD
Dec 12 18:23:44.244000 audit: BPF prog-id=25 op=UNLOAD
Dec 12 18:23:44.244000 audit: BPF prog-id=34 op=LOAD
Dec 12 18:23:44.246000 audit: BPF prog-id=35 op=LOAD
Dec 12 18:23:44.246000 audit: BPF prog-id=26 op=UNLOAD
Dec 12 18:23:44.246000 audit: BPF prog-id=27 op=UNLOAD
Dec 12 18:23:44.247000 audit: BPF prog-id=36 op=LOAD
Dec 12 18:23:44.247000 audit: BPF prog-id=21 op=UNLOAD
Dec 12 18:23:44.250000 audit: BPF prog-id=37 op=LOAD
Dec 12 18:23:44.251000 audit: BPF prog-id=15 op=UNLOAD
Dec 12 18:23:44.251000 audit: BPF prog-id=38 op=LOAD
Dec 12 18:23:44.251000 audit: BPF prog-id=39 op=LOAD
Dec 12 18:23:44.251000 audit: BPF prog-id=16 op=UNLOAD
Dec 12 18:23:44.251000 audit: BPF prog-id=17 op=UNLOAD
Dec 12 18:23:44.253000 audit: BPF prog-id=40 op=LOAD
Dec 12 18:23:44.255000 audit: BPF prog-id=18 op=UNLOAD
Dec 12 18:23:44.255000 audit: BPF prog-id=41 op=LOAD
Dec 12 18:23:44.255000 audit: BPF prog-id=42 op=LOAD
Dec 12 18:23:44.255000 audit: BPF prog-id=19 op=UNLOAD
Dec 12 18:23:44.255000 audit: BPF prog-id=20 op=UNLOAD
Dec 12 18:23:44.268666 systemd[1]: Reload requested from client PID 1389 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:23:44.268686 systemd[1]: Reloading...
Dec 12 18:23:44.274508 systemd-udevd[1391]: Using default interface naming scheme 'v257'.
Dec 12 18:23:44.283301 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:23:44.283621 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:23:44.284134 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:23:44.285430 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Dec 12 18:23:44.285499 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Dec 12 18:23:44.301271 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:23:44.301424 systemd-tmpfiles[1390]: Skipping /boot
Dec 12 18:23:44.320012 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:23:44.320145 systemd-tmpfiles[1390]: Skipping /boot
Dec 12 18:23:44.401939 zram_generator::config[1450]: No configuration found.
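The (sd-merge) entries above record systemd-sysext assembling /usr from extension images. As an illustration only (not part of the log), the image names can be recovered from such an entry with ordinary text tools:

```shell
# Sample sd-merge entry, copied verbatim from the log above.
line="Dec 12 18:23:43.744509 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'."

# Extract every quoted *.raw image name, one per line.
echo "$line" | grep -o "'[^']*\.raw'" | tr -d "'"
```

On a live host, `systemd-sysext status` reports the same merge state directly.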
Dec 12 18:23:44.496009 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:23:44.550912 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:23:44.558904 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:23:44.563296 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:23:44.602903 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:23:44.624613 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:23:44.624952 systemd[1]: Reloading finished in 355 ms.
Dec 12 18:23:44.635666 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:23:44.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.638511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:23:44.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.642000 audit: BPF prog-id=43 op=LOAD
Dec 12 18:23:44.645000 audit: BPF prog-id=36 op=UNLOAD
Dec 12 18:23:44.646000 audit: BPF prog-id=44 op=LOAD
Dec 12 18:23:44.646000 audit: BPF prog-id=40 op=UNLOAD
Dec 12 18:23:44.646000 audit: BPF prog-id=45 op=LOAD
Dec 12 18:23:44.646000 audit: BPF prog-id=46 op=LOAD
Dec 12 18:23:44.646000 audit: BPF prog-id=41 op=UNLOAD
Dec 12 18:23:44.646000 audit: BPF prog-id=42 op=UNLOAD
Dec 12 18:23:44.647000 audit: BPF prog-id=47 op=LOAD
Dec 12 18:23:44.648000 audit: BPF prog-id=37 op=UNLOAD
Dec 12 18:23:44.648000 audit: BPF prog-id=48 op=LOAD
Dec 12 18:23:44.648000 audit: BPF prog-id=49 op=LOAD
Dec 12 18:23:44.648000 audit: BPF prog-id=38 op=UNLOAD
Dec 12 18:23:44.648000 audit: BPF prog-id=39 op=UNLOAD
Dec 12 18:23:44.650000 audit: BPF prog-id=50 op=LOAD
Dec 12 18:23:44.651000 audit: BPF prog-id=33 op=UNLOAD
Dec 12 18:23:44.651000 audit: BPF prog-id=51 op=LOAD
Dec 12 18:23:44.651000 audit: BPF prog-id=52 op=LOAD
Dec 12 18:23:44.651000 audit: BPF prog-id=34 op=UNLOAD
Dec 12 18:23:44.651000 audit: BPF prog-id=35 op=UNLOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=53 op=LOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=30 op=UNLOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=54 op=LOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=55 op=LOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=31 op=UNLOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=32 op=UNLOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=56 op=LOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=57 op=LOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=28 op=UNLOAD
Dec 12 18:23:44.652000 audit: BPF prog-id=29 op=UNLOAD
Dec 12 18:23:44.684237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:23:44.688075 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:23:44.691721 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
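The long runs of `audit: BPF prog-id=N op=LOAD`/`op=UNLOAD` records are the kernel audit trail of systemd replacing per-unit BPF programs during the daemon reloads. A small sketch (sample records copied from above; the pipeline itself is illustrative) tallies the operations:

```shell
# Four audit records of the form seen above.
sample='audit: BPF prog-id=43 op=LOAD
audit: BPF prog-id=36 op=UNLOAD
audit: BPF prog-id=44 op=LOAD
audit: BPF prog-id=40 op=UNLOAD'

# Group by operation and count occurrences (prints one "OP count" line each).
printf '%s\n' "$sample" | awk -F'op=' '{count[$2]++} END {for (op in count) print op, count[op]}' | sort
```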
Dec 12 18:23:44.692633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:23:44.695147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:23:44.701192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:23:44.706310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:23:44.707759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:23:44.708121 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 18:23:44.711142 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:23:44.711927 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:23:44.715132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:23:44.717000 audit: BPF prog-id=58 op=LOAD
Dec 12 18:23:44.718901 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:23:44.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.724167 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:23:44.726970 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:23:44.728692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:23:44.728972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:23:44.745157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:23:44.745338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:23:44.751656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:23:44.753275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:23:44.753716 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 18:23:44.754030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:23:44.754177 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:23:44.760505 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:23:44.762917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:23:44.781621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:23:44.782583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:23:44.782759 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 18:23:44.782848 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:23:44.782989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:23:44.785000 audit[1518]: SYSTEM_BOOT pid=1518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.787407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:23:44.789450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:23:44.793535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:23:44.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.795006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:23:44.796377 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:23:44.810277 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:23:44.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.814083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:23:44.814000 audit: BPF prog-id=59 op=LOAD
Dec 12 18:23:44.818192 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:23:44.843493 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:23:44.846108 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:23:44.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.848324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:23:44.856695 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:23:44.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.859120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:23:44.861024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:23:44.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.862690 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:23:44.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:23:44.863970 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
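Each `Started`/`Finished` message above is paired with an audit SERVICE_START or SERVICE_STOP record that names the unit in its `msg=` field. As an illustration (record text copied from the log), the unit name can be pulled out with sed:

```shell
# One SERVICE_START record from the log above.
rec="audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'"

# The unit name is the token after "unit=", up to the next space.
echo "$rec" | sed -n 's/.*unit=\([^ ]*\).*/\1/p'
```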
Dec 12 18:23:44.871490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:23:44.904423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:23:44.911000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 12 18:23:44.913179 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:23:44.916971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:23:44.911000 audit[1555]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe51fc5e30 a2=420 a3=0 items=0 ppid=1509 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:23:44.911000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 12 18:23:44.918908 augenrules[1555]: No rules
Dec 12 18:23:44.924545 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:23:44.925222 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:23:44.994940 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:23:45.177144 systemd-networkd[1517]: lo: Link UP
Dec 12 18:23:45.177531 systemd-networkd[1517]: lo: Gained carrier
Dec 12 18:23:45.180305 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:23:45.182543 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:23:45.182626 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:23:45.192580 systemd-networkd[1517]: eth0: Link UP
Dec 12 18:23:45.194665 systemd-networkd[1517]: eth0: Gained carrier
Dec 12 18:23:45.194684 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:23:45.304835 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:23:45.309675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:23:45.316145 systemd[1]: Reached target network.target - Network.
Dec 12 18:23:45.317684 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:23:45.322056 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:23:45.326167 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:23:45.363200 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:23:45.446157 ldconfig[1514]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:23:45.449543 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:23:45.451978 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:23:45.475091 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:23:45.476103 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:23:45.477149 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:23:45.477951 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:23:45.478708 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:23:45.479637 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:23:45.480630 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:23:45.481427 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 12 18:23:45.482271 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 12 18:23:45.483058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:23:45.483804 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:23:45.483841 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:23:45.484522 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:23:45.486554 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:23:45.488746 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:23:45.492079 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:23:45.493263 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:23:45.494131 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:23:45.497990 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:23:45.499050 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:23:45.500449 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:23:45.501928 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:23:45.502616 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:23:45.503549 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:23:45.503588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:23:45.505152 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:23:45.508138 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:23:45.512022 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:23:45.515084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:23:45.520079 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:23:45.524115 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:23:45.525935 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:23:45.530245 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:23:45.539695 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:23:45.551795 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:23:45.554662 jq[1585]: false
Dec 12 18:23:45.555035 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:23:45.559017 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:23:45.570735 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing passwd entry cache
Dec 12 18:23:45.575867 oslogin_cache_refresh[1587]: Refreshing passwd entry cache
Dec 12 18:23:45.572446 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:23:45.574656 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:23:45.575211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:23:45.577083 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:23:45.583522 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:23:45.592735 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting users, quitting
Dec 12 18:23:45.593414 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:23:45.595980 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:23:45.596060 oslogin_cache_refresh[1587]: Failure getting users, quitting
Dec 12 18:23:45.596141 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:23:45.596472 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:23:45.597095 oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:23:45.597199 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing group entry cache
Dec 12 18:23:45.598016 oslogin_cache_refresh[1587]: Refreshing group entry cache
Dec 12 18:23:45.600757 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting groups, quitting
Dec 12 18:23:45.600757 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:23:45.598516 oslogin_cache_refresh[1587]: Failure getting groups, quitting
Dec 12 18:23:45.598527 oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:23:45.605405 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:23:45.606966 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:23:45.611968 update_engine[1595]: I20251212 18:23:45.610906 1595 main.cc:92] Flatcar Update Engine starting
Dec 12 18:23:45.611719 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:23:45.612524 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:23:45.617100 coreos-metadata[1582]: Dec 12 18:23:45.615 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:23:45.617299 extend-filesystems[1586]: Found /dev/sda6
Dec 12 18:23:45.638981 jq[1596]: true
Dec 12 18:23:45.642315 extend-filesystems[1586]: Found /dev/sda9
Dec 12 18:23:45.657254 extend-filesystems[1586]: Checking size of /dev/sda9
Dec 12 18:23:45.666174 tar[1603]: linux-amd64/LICENSE
Dec 12 18:23:45.667438 tar[1603]: linux-amd64/helm
Dec 12 18:23:45.668272 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:23:45.668588 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:23:45.678676 extend-filesystems[1586]: Resized partition /dev/sda9
Dec 12 18:23:45.699111 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:23:45.703062 jq[1629]: true
Dec 12 18:23:45.703260 extend-filesystems[1637]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:23:45.698625 dbus-daemon[1583]: [system] SELinux support is enabled
Dec 12 18:23:45.711292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:23:45.711322 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:23:45.714083 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:23:45.714109 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:23:45.726891 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks
Dec 12 18:23:45.727391 update_engine[1595]: I20251212 18:23:45.727344 1595 update_check_scheduler.cc:74] Next update check in 9m37s
Dec 12 18:23:45.729385 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:23:45.739953 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:23:45.884768 systemd-logind[1594]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:23:45.885402 systemd-logind[1594]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:23:45.886031 systemd-logind[1594]: New seat seat0.
Dec 12 18:23:45.889496 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:23:45.899152 bash[1661]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:23:45.900995 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:23:45.915467 systemd[1]: Starting sshkeys.service...
Dec 12 18:23:45.969431 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:23:45.974130 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:23:45.992225 dbus-daemon[1583]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1517 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 12 18:23:45.992950 systemd-networkd[1517]: eth0: DHCPv4 address 172.234.207.166/24, gateway 172.234.207.1 acquired from 23.205.167.214
Dec 12 18:23:45.999427 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
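The networkd entry above carries the DHCPv4 lease for eth0. As an illustration (message text copied from the log), the address and gateway fields can be split out with sed:

```shell
# DHCPv4 lease message from the log above.
msg="eth0: DHCPv4 address 172.234.207.166/24, gateway 172.234.207.1 acquired from 23.205.167.214"

# Capture the leased address and the gateway.
addr=$(echo "$msg" | sed -n 's/.*address \([0-9.]*\)\/[0-9]*, gateway.*/\1/p')
gw=$(echo "$msg" | sed -n 's/.*gateway \([0-9.]*\) .*/\1/p')
echo "addr=$addr gw=$gw"
```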
Dec 12 18:23:46.002990 systemd-timesyncd[1532]: Network configuration changed, trying to establish connection.
Dec 12 18:23:46.039564 containerd[1630]: time="2025-12-12T18:23:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:23:46.042654 containerd[1630]: time="2025-12-12T18:23:46.042555607Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 12 18:23:46.076511 containerd[1630]: time="2025-12-12T18:23:46.076433664Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.47µs"
Dec 12 18:23:46.076511 containerd[1630]: time="2025-12-12T18:23:46.076495924Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:23:46.076607 containerd[1630]: time="2025-12-12T18:23:46.076559494Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:23:46.076607 containerd[1630]: time="2025-12-12T18:23:46.076575614Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:23:46.076889 containerd[1630]: time="2025-12-12T18:23:46.076787864Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:23:46.076889 containerd[1630]: time="2025-12-12T18:23:46.076821734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:23:46.078976 containerd[1630]: time="2025-12-12T18:23:46.078938496Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:23:46.078976 containerd[1630]: time="2025-12-12T18:23:46.078967966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.079523 containerd[1630]: time="2025-12-12T18:23:46.079484136Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.079554 containerd[1630]: time="2025-12-12T18:23:46.079517106Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:23:46.079554 containerd[1630]: time="2025-12-12T18:23:46.079539606Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:23:46.079554 containerd[1630]: time="2025-12-12T18:23:46.079550146Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.079909 containerd[1630]: time="2025-12-12T18:23:46.079857596Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.080281 containerd[1630]: time="2025-12-12T18:23:46.079952186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:23:46.080281 containerd[1630]: time="2025-12-12T18:23:46.080066776Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.081419 containerd[1630]: time="2025-12-12T18:23:46.081385597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.081479 containerd[1630]: time="2025-12-12T18:23:46.081440657Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:23:46.081901 containerd[1630]: time="2025-12-12T18:23:46.081473657Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:23:46.082054 containerd[1630]: time="2025-12-12T18:23:46.082024967Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:23:46.085448 containerd[1630]: time="2025-12-12T18:23:46.085413709Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:23:46.085576 containerd[1630]: time="2025-12-12T18:23:46.085545269Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:23:46.099239 coreos-metadata[1665]: Dec 12 18:23:46.098 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:23:46.102075 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 12 18:23:46.103357 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:23:46.104915 dbus-daemon[1583]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1670 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107214520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107337360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107443960Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107458020Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107479970Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107494630Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107507790Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107518270Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107530890Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107550130Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107562890Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107575360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107584190Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:23:46.108332 containerd[1630]: time="2025-12-12T18:23:46.107607150Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107754120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107775630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107794160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107805410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107820920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107836170Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107849230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:23:46.108599 containerd[1630]: time="2025-12-12T18:23:46.107860110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:23:46.111132 containerd[1630]: time="2025-12-12T18:23:46.109936381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:23:46.111132 containerd[1630]: time="2025-12-12T18:23:46.110067221Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:23:46.111132 containerd[1630]: time="2025-12-12T18:23:46.110115241Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:23:46.111132 containerd[1630]: time="2025-12-12T18:23:46.110514091Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:23:46.111132 containerd[1630]: time="2025-12-12T18:23:46.110655501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:23:46.111132 containerd[1630]: time="2025-12-12T18:23:46.110717451Z" level=info msg="Start snapshots syncer" Dec 12 18:23:46.112124 systemd[1]: Starting polkit.service - Authorization Manager... 
Dec 12 18:23:46.114518 containerd[1630]: time="2025-12-12T18:23:46.111111112Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:23:46.119800 containerd[1630]: time="2025-12-12T18:23:46.116256494Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.
containerd.grpc.v1.cri\"}" Dec 12 18:23:46.119800 containerd[1630]: time="2025-12-12T18:23:46.118273875Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:23:46.121348 containerd[1630]: time="2025-12-12T18:23:46.120312516Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:23:46.128913 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128019400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128167230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128184060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128196010Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128311440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128324620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128338850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128353720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128413430Z" level=info 
msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128562020Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128581870Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128591840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128612560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:23:46.128956 containerd[1630]: time="2025-12-12T18:23:46.128621730Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128637880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128649070Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128662000Z" level=info msg="runtime interface created" Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128668260Z" level=info msg="created NRI interface" Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128677190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128697230Z" level=info msg="Connect 
containerd service" Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.128724580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:23:46.148664 containerd[1630]: time="2025-12-12T18:23:46.143625088Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:23:46.147393 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:23:46.152349 extend-filesystems[1637]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 12 18:23:46.152349 extend-filesystems[1637]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 12 18:23:46.152349 extend-filesystems[1637]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Dec 12 18:23:46.149077 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:23:46.164088 extend-filesystems[1586]: Resized filesystem in /dev/sda9 Dec 12 18:23:46.163337 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:23:46.965953 systemd-timesyncd[1532]: Contacted time server 72.14.186.59:123 (0.flatcar.pool.ntp.org). Dec 12 18:23:46.966072 systemd-timesyncd[1532]: Initial clock synchronization to Fri 2025-12-12 18:23:46.965493 UTC. Dec 12 18:23:46.966661 systemd-resolved[1292]: Clock change detected. Flushing caches. 
Dec 12 18:23:46.987478 coreos-metadata[1665]: Dec 12 18:23:46.986 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 12 18:23:47.095044 polkitd[1676]: Started polkitd version 126 Dec 12 18:23:47.106214 polkitd[1676]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:23:47.106476 polkitd[1676]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:23:47.106522 polkitd[1676]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:23:47.106733 polkitd[1676]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:23:47.106759 polkitd[1676]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:23:47.106795 polkitd[1676]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:23:47.111231 polkitd[1676]: Finished loading, compiling and executing 2 rules Dec 12 18:23:47.111526 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 18:23:47.113815 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:23:47.114308 polkitd[1676]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:23:47.117190 containerd[1630]: time="2025-12-12T18:23:47.117165097Z" level=info msg="Start subscribing containerd event" Dec 12 18:23:47.117508 containerd[1630]: time="2025-12-12T18:23:47.117449717Z" level=info msg="Start recovering state" Dec 12 18:23:47.118572 containerd[1630]: time="2025-12-12T18:23:47.118555918Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:23:47.119057 containerd[1630]: time="2025-12-12T18:23:47.118968098Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 18:23:47.120566 containerd[1630]: time="2025-12-12T18:23:47.118749558Z" level=info msg="Start event monitor" Dec 12 18:23:47.121409 containerd[1630]: time="2025-12-12T18:23:47.121394689Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:23:47.121458 containerd[1630]: time="2025-12-12T18:23:47.121447409Z" level=info msg="Start streaming server" Dec 12 18:23:47.121554 containerd[1630]: time="2025-12-12T18:23:47.121541909Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:23:47.121594 containerd[1630]: time="2025-12-12T18:23:47.121584839Z" level=info msg="runtime interface starting up..." Dec 12 18:23:47.122005 containerd[1630]: time="2025-12-12T18:23:47.121970929Z" level=info msg="starting plugins..." Dec 12 18:23:47.122064 containerd[1630]: time="2025-12-12T18:23:47.122052739Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:23:47.123187 containerd[1630]: time="2025-12-12T18:23:47.122225290Z" level=info msg="containerd successfully booted in 0.295994s" Dec 12 18:23:47.122350 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:23:47.134808 systemd-hostnamed[1670]: Hostname set to <172-234-207-166> (transient) Dec 12 18:23:47.134848 systemd-resolved[1292]: System hostname changed to '172-234-207-166'. Dec 12 18:23:47.136490 coreos-metadata[1665]: Dec 12 18:23:47.136 INFO Fetch successful Dec 12 18:23:47.170099 update-ssh-keys[1702]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:23:47.171502 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:23:47.178598 systemd[1]: Finished sshkeys.service. Dec 12 18:23:47.220946 sshd_keygen[1613]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:23:47.245875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:23:47.250193 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Dec 12 18:23:47.267520 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:23:47.267843 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:23:47.273383 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:23:47.293956 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:23:47.297632 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:23:47.301280 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:23:47.302317 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:23:47.318480 tar[1603]: linux-amd64/README.md Dec 12 18:23:47.334267 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:23:47.412357 coreos-metadata[1582]: Dec 12 18:23:47.412 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 12 18:23:47.501040 coreos-metadata[1582]: Dec 12 18:23:47.501 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 12 18:23:47.686195 coreos-metadata[1582]: Dec 12 18:23:47.685 INFO Fetch successful Dec 12 18:23:47.686195 coreos-metadata[1582]: Dec 12 18:23:47.685 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 12 18:23:47.947896 coreos-metadata[1582]: Dec 12 18:23:47.947 INFO Fetch successful Dec 12 18:23:48.036110 systemd-networkd[1517]: eth0: Gained IPv6LL Dec 12 18:23:48.038602 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:23:48.045477 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:23:48.058225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:23:48.062260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:23:48.080511 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 12 18:23:48.084823 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:23:48.101768 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:23:49.023817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:23:49.026185 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:23:49.027614 systemd[1]: Startup finished in 3.048s (kernel) + 6.417s (initrd) + 6.131s (userspace) = 15.597s. Dec 12 18:23:49.032517 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:23:49.508891 kubelet[1762]: E1212 18:23:49.508843 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:23:49.512748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:23:49.512963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:23:49.513468 systemd[1]: kubelet.service: Consumed 916ms CPU time, 256.8M memory peak. Dec 12 18:23:50.274167 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:23:50.275712 systemd[1]: Started sshd@0-172.234.207.166:22-139.178.89.65:37536.service - OpenSSH per-connection server daemon (139.178.89.65:37536). Dec 12 18:23:50.585512 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 37536 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:50.587533 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:50.594393 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 12 18:23:50.595472 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:23:50.601516 systemd-logind[1594]: New session 1 of user core. Dec 12 18:23:50.618582 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:23:50.621348 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:23:50.635470 (systemd)[1779]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:23:50.638364 systemd-logind[1594]: New session c1 of user core. Dec 12 18:23:50.781847 systemd[1779]: Queued start job for default target default.target. Dec 12 18:23:50.791732 systemd[1779]: Created slice app.slice - User Application Slice. Dec 12 18:23:50.791761 systemd[1779]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 18:23:50.791774 systemd[1779]: Reached target paths.target - Paths. Dec 12 18:23:50.791821 systemd[1779]: Reached target timers.target - Timers. Dec 12 18:23:50.793692 systemd[1779]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:23:50.796177 systemd[1779]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 18:23:50.818450 systemd[1779]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 18:23:50.818644 systemd[1779]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:23:50.818782 systemd[1779]: Reached target sockets.target - Sockets. Dec 12 18:23:50.818829 systemd[1779]: Reached target basic.target - Basic System. Dec 12 18:23:50.818877 systemd[1779]: Reached target default.target - Main User Target. Dec 12 18:23:50.818912 systemd[1779]: Startup finished in 173ms. Dec 12 18:23:50.819098 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:23:50.826143 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 12 18:23:50.997119 systemd[1]: Started sshd@1-172.234.207.166:22-139.178.89.65:37540.service - OpenSSH per-connection server daemon (139.178.89.65:37540). Dec 12 18:23:51.294222 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 37540 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:51.296346 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:51.303200 systemd-logind[1594]: New session 2 of user core. Dec 12 18:23:51.310193 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:23:51.454702 sshd[1795]: Connection closed by 139.178.89.65 port 37540 Dec 12 18:23:51.455572 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:51.460619 systemd[1]: sshd@1-172.234.207.166:22-139.178.89.65:37540.service: Deactivated successfully. Dec 12 18:23:51.462847 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:23:51.464200 systemd-logind[1594]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:23:51.465175 systemd-logind[1594]: Removed session 2. Dec 12 18:23:51.517465 systemd[1]: Started sshd@2-172.234.207.166:22-139.178.89.65:37544.service - OpenSSH per-connection server daemon (139.178.89.65:37544). Dec 12 18:23:51.842895 sshd[1801]: Accepted publickey for core from 139.178.89.65 port 37544 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:51.845378 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:51.853797 systemd-logind[1594]: New session 3 of user core. Dec 12 18:23:51.860198 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 12 18:23:51.996439 sshd[1804]: Connection closed by 139.178.89.65 port 37544 Dec 12 18:23:51.997142 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:52.005755 systemd[1]: sshd@2-172.234.207.166:22-139.178.89.65:37544.service: Deactivated successfully. Dec 12 18:23:52.008010 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:23:52.008797 systemd-logind[1594]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:23:52.010178 systemd-logind[1594]: Removed session 3. Dec 12 18:23:52.059514 systemd[1]: Started sshd@3-172.234.207.166:22-139.178.89.65:37558.service - OpenSSH per-connection server daemon (139.178.89.65:37558). Dec 12 18:23:52.373835 sshd[1810]: Accepted publickey for core from 139.178.89.65 port 37558 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:52.375455 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:52.381088 systemd-logind[1594]: New session 4 of user core. Dec 12 18:23:52.387138 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:23:52.540008 sshd[1813]: Connection closed by 139.178.89.65 port 37558 Dec 12 18:23:52.540751 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:52.546766 systemd-logind[1594]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:23:52.547237 systemd[1]: sshd@3-172.234.207.166:22-139.178.89.65:37558.service: Deactivated successfully. Dec 12 18:23:52.550194 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:23:52.552198 systemd-logind[1594]: Removed session 4. Dec 12 18:23:52.605842 systemd[1]: Started sshd@4-172.234.207.166:22-139.178.89.65:37570.service - OpenSSH per-connection server daemon (139.178.89.65:37570). 
Dec 12 18:23:52.909716 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 37570 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:52.911520 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:52.918310 systemd-logind[1594]: New session 5 of user core. Dec 12 18:23:52.926205 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:23:53.033637 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:23:53.034007 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:23:53.047394 sudo[1823]: pam_unix(sudo:session): session closed for user root Dec 12 18:23:53.098496 sshd[1822]: Connection closed by 139.178.89.65 port 37570 Dec 12 18:23:53.099189 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:53.103161 systemd[1]: sshd@4-172.234.207.166:22-139.178.89.65:37570.service: Deactivated successfully. Dec 12 18:23:53.105138 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:23:53.108157 systemd-logind[1594]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:23:53.109720 systemd-logind[1594]: Removed session 5. Dec 12 18:23:53.163393 systemd[1]: Started sshd@5-172.234.207.166:22-139.178.89.65:37576.service - OpenSSH per-connection server daemon (139.178.89.65:37576). Dec 12 18:23:53.493241 sshd[1829]: Accepted publickey for core from 139.178.89.65 port 37576 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:53.495434 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:53.502411 systemd-logind[1594]: New session 6 of user core. Dec 12 18:23:53.507207 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 12 18:23:53.609528 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:23:53.609910 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:23:53.616940 sudo[1834]: pam_unix(sudo:session): session closed for user root Dec 12 18:23:53.625239 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:23:53.625552 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:23:53.635280 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:23:53.681018 kernel: kauditd_printk_skb: 95 callbacks suppressed Dec 12 18:23:53.681069 kernel: audit: type=1305 audit(1765563833.673:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:23:53.673000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:23:53.676719 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:23:53.681187 augenrules[1856]: No rules Dec 12 18:23:53.677039 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 12 18:23:53.681689 sudo[1833]: pam_unix(sudo:session): session closed for user root Dec 12 18:23:53.673000 audit[1856]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc38e0d8e0 a2=420 a3=0 items=0 ppid=1837 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:53.673000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:23:53.692618 kernel: audit: type=1300 audit(1765563833.673:230): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc38e0d8e0 a2=420 a3=0 items=0 ppid=1837 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:53.692698 kernel: audit: type=1327 audit(1765563833.673:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:23:53.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.696527 kernel: audit: type=1130 audit(1765563833.676:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:53.702099 kernel: audit: type=1131 audit(1765563833.676:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.680000 audit[1833]: USER_END pid=1833 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.707816 kernel: audit: type=1106 audit(1765563833.680:233): pid=1833 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.680000 audit[1833]: CRED_DISP pid=1833 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.714200 kernel: audit: type=1104 audit(1765563833.680:234): pid=1833 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.735455 sshd[1832]: Connection closed by 139.178.89.65 port 37576 Dec 12 18:23:53.736241 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Dec 12 18:23:53.737000 audit[1829]: USER_END pid=1829 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:53.741401 systemd-logind[1594]: Session 6 logged out. Waiting for processes to exit. 
Dec 12 18:23:53.743618 systemd[1]: sshd@5-172.234.207.166:22-139.178.89.65:37576.service: Deactivated successfully. Dec 12 18:23:53.745873 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:23:53.748618 systemd-logind[1594]: Removed session 6. Dec 12 18:23:53.749564 kernel: audit: type=1106 audit(1765563833.737:235): pid=1829 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:53.737000 audit[1829]: CRED_DISP pid=1829 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:53.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.234.207.166:22-139.178.89.65:37576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.758530 kernel: audit: type=1104 audit(1765563833.737:236): pid=1829 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:53.758568 kernel: audit: type=1131 audit(1765563833.742:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.234.207.166:22-139.178.89.65:37576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:23:53.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.207.166:22-139.178.89.65:37588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:53.800455 systemd[1]: Started sshd@6-172.234.207.166:22-139.178.89.65:37588.service - OpenSSH per-connection server daemon (139.178.89.65:37588). Dec 12 18:23:54.114000 audit[1865]: USER_ACCT pid=1865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:54.116227 sshd[1865]: Accepted publickey for core from 139.178.89.65 port 37588 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:23:54.116000 audit[1865]: CRED_ACQ pid=1865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:54.116000 audit[1865]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffacdf9cc0 a2=3 a3=0 items=0 ppid=1 pid=1865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:54.116000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:23:54.117942 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:23:54.124911 systemd-logind[1594]: New session 7 of user core. Dec 12 18:23:54.131179 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 18:23:54.133000 audit[1865]: USER_START pid=1865 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:54.136000 audit[1868]: CRED_ACQ pid=1868 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:23:54.229000 audit[1869]: USER_ACCT pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:54.230513 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:23:54.229000 audit[1869]: CRED_REFR pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:54.231076 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:23:54.232000 audit[1869]: USER_START pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:23:54.618824 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 12 18:23:54.629300 (dockerd)[1887]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:23:54.887574 dockerd[1887]: time="2025-12-12T18:23:54.887426409Z" level=info msg="Starting up" Dec 12 18:23:54.888769 dockerd[1887]: time="2025-12-12T18:23:54.888749500Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:23:54.904151 dockerd[1887]: time="2025-12-12T18:23:54.904125308Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:23:54.919741 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1038987283-merged.mount: Deactivated successfully. Dec 12 18:23:54.928500 systemd[1]: var-lib-docker-metacopy\x2dcheck2170142207-merged.mount: Deactivated successfully. Dec 12 18:23:54.954790 dockerd[1887]: time="2025-12-12T18:23:54.954748583Z" level=info msg="Loading containers: start." 
Dec 12 18:23:54.967081 kernel: Initializing XFRM netlink socket Dec 12 18:23:55.031000 audit[1935]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.031000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef99d5240 a2=0 a3=0 items=0 ppid=1887 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:23:55.033000 audit[1937]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.033000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe99607e90 a2=0 a3=0 items=0 ppid=1887 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:23:55.036000 audit[1939]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.036000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7c1885a0 a2=0 a3=0 items=0 ppid=1887 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.036000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:23:55.038000 audit[1941]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.038000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8ebb9d20 a2=0 a3=0 items=0 ppid=1887 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.038000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:23:55.040000 audit[1943]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.040000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffccdbc3d70 a2=0 a3=0 items=0 ppid=1887 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:23:55.043000 audit[1945]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.043000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1dff1ee0 a2=0 a3=0 items=0 ppid=1887 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.043000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:23:55.045000 audit[1947]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.045000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe7184e4c0 a2=0 a3=0 items=0 ppid=1887 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:23:55.048000 audit[1949]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.048000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe501be2c0 a2=0 a3=0 items=0 ppid=1887 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:23:55.077000 audit[1952]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.077000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe327c52c0 a2=0 a3=0 items=0 ppid=1887 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 18:23:55.080000 audit[1954]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.080000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5ad5e790 a2=0 a3=0 items=0 ppid=1887 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:23:55.083000 audit[1956]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.083000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff360d68f0 a2=0 a3=0 items=0 ppid=1887 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.083000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:23:55.086000 audit[1958]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.086000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdacd54220 a2=0 a3=0 items=0 ppid=1887 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.086000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:23:55.089000 audit[1960]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.089000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffe8298a10 a2=0 a3=0 items=0 ppid=1887 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:23:55.132000 audit[1990]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.132000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc064e5b00 a2=0 a3=0 items=0 ppid=1887 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:23:55.135000 audit[1992]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.135000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc77226320 a2=0 a3=0 items=0 ppid=1887 pid=1992 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.135000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:23:55.137000 audit[1994]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.137000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc95b76d0 a2=0 a3=0 items=0 ppid=1887 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.137000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:23:55.140000 audit[1996]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.140000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeecb28620 a2=0 a3=0 items=0 ppid=1887 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:23:55.142000 audit[1998]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.142000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4b428bd0 a2=0 a3=0 items=0 ppid=1887 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:23:55.144000 audit[2000]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.144000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7526b3d0 a2=0 a3=0 items=0 ppid=1887 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:23:55.147000 audit[2002]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.147000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd7e2f4b30 a2=0 a3=0 items=0 ppid=1887 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:23:55.149000 audit[2004]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.149000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd88cf8a10 a2=0 a3=0 items=0 ppid=1887 pid=2004 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:23:55.152000 audit[2006]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.152000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffed61d3f80 a2=0 a3=0 items=0 ppid=1887 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 18:23:55.154000 audit[2008]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.154000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffddd3f1930 a2=0 a3=0 items=0 ppid=1887 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:23:55.157000 audit[2010]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 12 18:23:55.157000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc83d7d1c0 a2=0 a3=0 items=0 ppid=1887 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:23:55.159000 audit[2012]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.159000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc92a36370 a2=0 a3=0 items=0 ppid=1887 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:23:55.161000 audit[2014]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.161000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffccc69cf10 a2=0 a3=0 items=0 ppid=1887 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:23:55.167000 audit[2019]: NETFILTER_CFG table=filter:28 family=2 entries=1 
op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.167000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca10502d0 a2=0 a3=0 items=0 ppid=1887 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:23:55.170000 audit[2021]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.170000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffffb7b9630 a2=0 a3=0 items=0 ppid=1887 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:23:55.172000 audit[2023]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.172000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdc7b4aa10 a2=0 a3=0 items=0 ppid=1887 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:23:55.175000 audit[2025]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain 
pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.175000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc4f640bf0 a2=0 a3=0 items=0 ppid=1887 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:23:55.177000 audit[2027]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.177000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea98f7330 a2=0 a3=0 items=0 ppid=1887 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:23:55.179000 audit[2029]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:23:55.179000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd536b1f10 a2=0 a3=0 items=0 ppid=1887 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:23:55.192000 audit[2034]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2034 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.192000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd3b4c8fb0 a2=0 a3=0 items=0 ppid=1887 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 18:23:55.198000 audit[2036]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.198000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd50748520 a2=0 a3=0 items=0 ppid=1887 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 18:23:55.210000 audit[2044]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.210000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffec5013c30 a2=0 a3=0 items=0 ppid=1887 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.210000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 18:23:55.220000 audit[2050]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.220000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff3c5de0f0 a2=0 a3=0 items=0 ppid=1887 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 18:23:55.223000 audit[2052]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.223000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe54c7e0e0 a2=0 a3=0 items=0 ppid=1887 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 18:23:55.225000 audit[2054]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.225000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd8acd6590 a2=0 a3=0 items=0 ppid=1887 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 18:23:55.228000 audit[2056]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.228000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdc77a9a00 a2=0 a3=0 items=0 ppid=1887 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:23:55.230000 audit[2058]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:23:55.230000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd76263ff0 a2=0 a3=0 items=0 ppid=1887 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:23:55.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 18:23:55.232582 systemd-networkd[1517]: docker0: Link UP Dec 12 18:23:55.237487 dockerd[1887]: time="2025-12-12T18:23:55.237451484Z" 
level=info msg="Loading containers: done." Dec 12 18:23:55.254434 dockerd[1887]: time="2025-12-12T18:23:55.254391233Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:23:55.254608 dockerd[1887]: time="2025-12-12T18:23:55.254456883Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:23:55.254608 dockerd[1887]: time="2025-12-12T18:23:55.254538143Z" level=info msg="Initializing buildkit" Dec 12 18:23:55.274933 dockerd[1887]: time="2025-12-12T18:23:55.274911513Z" level=info msg="Completed buildkit initialization" Dec 12 18:23:55.280586 dockerd[1887]: time="2025-12-12T18:23:55.280553496Z" level=info msg="Daemon has completed initialization" Dec 12 18:23:55.280687 dockerd[1887]: time="2025-12-12T18:23:55.280658146Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:23:55.280817 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:23:55.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:55.916276 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1823108842-merged.mount: Deactivated successfully. Dec 12 18:23:55.971837 containerd[1630]: time="2025-12-12T18:23:55.971788951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 18:23:57.094602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222487725.mount: Deactivated successfully. 
Dec 12 18:23:57.843305 containerd[1630]: time="2025-12-12T18:23:57.843250386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:23:57.844340 containerd[1630]: time="2025-12-12T18:23:57.844098807Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Dec 12 18:23:57.844937 containerd[1630]: time="2025-12-12T18:23:57.844897337Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:23:57.847126 containerd[1630]: time="2025-12-12T18:23:57.847098038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:23:57.848097 containerd[1630]: time="2025-12-12T18:23:57.848071519Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.876249468s" Dec 12 18:23:57.848146 containerd[1630]: time="2025-12-12T18:23:57.848102269Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 12 18:23:57.848963 containerd[1630]: time="2025-12-12T18:23:57.848945529Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 18:23:59.368814 containerd[1630]: time="2025-12-12T18:23:59.368751579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:23:59.370914 containerd[1630]: time="2025-12-12T18:23:59.370692450Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 12 18:23:59.371727 containerd[1630]: time="2025-12-12T18:23:59.371698950Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:23:59.374093 containerd[1630]: time="2025-12-12T18:23:59.374072661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:23:59.375091 containerd[1630]: time="2025-12-12T18:23:59.375066432Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.525959283s" Dec 12 18:23:59.375178 containerd[1630]: time="2025-12-12T18:23:59.375156522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 12 18:23:59.375680 containerd[1630]: time="2025-12-12T18:23:59.375617502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 18:23:59.756047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:23:59.757775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 18:23:59.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:59.950230 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 18:23:59.950290 kernel: audit: type=1130 audit(1765563839.948:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:23:59.949032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:23:59.960307 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:24:00.003563 kubelet[2165]: E1212 18:24:00.003505 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:24:00.008484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:24:00.008690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:24:00.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:24:00.009216 systemd[1]: kubelet.service: Consumed 207ms CPU time, 110.2M memory peak. Dec 12 18:24:00.016133 kernel: audit: type=1131 audit(1765563840.008:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 12 18:24:00.499005 containerd[1630]: time="2025-12-12T18:24:00.498730573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:00.499956 containerd[1630]: time="2025-12-12T18:24:00.499772474Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Dec 12 18:24:00.500443 containerd[1630]: time="2025-12-12T18:24:00.500417764Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:00.503267 containerd[1630]: time="2025-12-12T18:24:00.503246175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:00.504028 containerd[1630]: time="2025-12-12T18:24:00.504007696Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.128357634s" Dec 12 18:24:00.504607 containerd[1630]: time="2025-12-12T18:24:00.504091926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 12 18:24:00.504818 containerd[1630]: time="2025-12-12T18:24:00.504803046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 18:24:01.691005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748215678.mount: Deactivated successfully. 
Dec 12 18:24:01.943937 containerd[1630]: time="2025-12-12T18:24:01.943806055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:01.944752 containerd[1630]: time="2025-12-12T18:24:01.944651976Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=11585785" Dec 12 18:24:01.945326 containerd[1630]: time="2025-12-12T18:24:01.945299976Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:01.946726 containerd[1630]: time="2025-12-12T18:24:01.946688007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:01.947304 containerd[1630]: time="2025-12-12T18:24:01.947274217Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.442385531s" Dec 12 18:24:01.947383 containerd[1630]: time="2025-12-12T18:24:01.947303337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 12 18:24:01.948031 containerd[1630]: time="2025-12-12T18:24:01.948012177Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 18:24:02.526683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494473252.mount: Deactivated successfully. 
Dec 12 18:24:03.312368 containerd[1630]: time="2025-12-12T18:24:03.312285479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:03.313454 containerd[1630]: time="2025-12-12T18:24:03.313206039Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511" Dec 12 18:24:03.314063 containerd[1630]: time="2025-12-12T18:24:03.314033580Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:03.316191 containerd[1630]: time="2025-12-12T18:24:03.316153031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:03.317052 containerd[1630]: time="2025-12-12T18:24:03.317021781Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.368986644s" Dec 12 18:24:03.317097 containerd[1630]: time="2025-12-12T18:24:03.317054111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 12 18:24:03.317836 containerd[1630]: time="2025-12-12T18:24:03.317815092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 18:24:03.935849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734477328.mount: Deactivated successfully. 
Dec 12 18:24:03.941185 containerd[1630]: time="2025-12-12T18:24:03.941146173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:03.941845 containerd[1630]: time="2025-12-12T18:24:03.941816624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 12 18:24:03.942443 containerd[1630]: time="2025-12-12T18:24:03.942388554Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:03.944318 containerd[1630]: time="2025-12-12T18:24:03.944280325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:03.945299 containerd[1630]: time="2025-12-12T18:24:03.944884495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 627.042853ms" Dec 12 18:24:03.945299 containerd[1630]: time="2025-12-12T18:24:03.944914745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 12 18:24:03.945570 containerd[1630]: time="2025-12-12T18:24:03.945523965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 18:24:04.747635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110868652.mount: Deactivated successfully. 
Dec 12 18:24:06.643036 containerd[1630]: time="2025-12-12T18:24:06.642945503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:06.644582 containerd[1630]: time="2025-12-12T18:24:06.644555364Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Dec 12 18:24:06.644886 containerd[1630]: time="2025-12-12T18:24:06.644846724Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:06.647957 containerd[1630]: time="2025-12-12T18:24:06.647913066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:06.648944 containerd[1630]: time="2025-12-12T18:24:06.648830566Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.703277191s" Dec 12 18:24:06.648944 containerd[1630]: time="2025-12-12T18:24:06.648857006Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 12 18:24:10.256044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:24:10.261231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:24:10.342569 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:24:10.342815 systemd[1]: kubelet.service: Failed with result 'signal'. 
Dec 12 18:24:10.343439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:24:10.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:24:10.351289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:24:10.352004 kernel: audit: type=1130 audit(1765563850.342:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:24:10.384296 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-7.scope)... Dec 12 18:24:10.384429 systemd[1]: Reloading... Dec 12 18:24:10.532025 zram_generator::config[2368]: No configuration found. Dec 12 18:24:10.843079 systemd[1]: Reloading finished in 458 ms. Dec 12 18:24:10.875000 audit: BPF prog-id=67 op=LOAD Dec 12 18:24:10.880041 kernel: audit: type=1334 audit(1765563850.875:291): prog-id=67 op=LOAD Dec 12 18:24:10.878000 audit: BPF prog-id=50 op=UNLOAD Dec 12 18:24:10.878000 audit: BPF prog-id=68 op=LOAD Dec 12 18:24:10.878000 audit: BPF prog-id=69 op=LOAD Dec 12 18:24:10.878000 audit: BPF prog-id=51 op=UNLOAD Dec 12 18:24:10.878000 audit: BPF prog-id=52 op=UNLOAD Dec 12 18:24:10.884805 kernel: audit: type=1334 audit(1765563850.878:292): prog-id=50 op=UNLOAD Dec 12 18:24:10.884893 kernel: audit: type=1334 audit(1765563850.878:293): prog-id=68 op=LOAD Dec 12 18:24:10.884919 kernel: audit: type=1334 audit(1765563850.878:294): prog-id=69 op=LOAD Dec 12 18:24:10.884953 kernel: audit: type=1334 audit(1765563850.878:295): prog-id=51 op=UNLOAD Dec 12 18:24:10.885001 kernel: audit: type=1334 audit(1765563850.878:296): prog-id=52 op=UNLOAD Dec 12 18:24:10.885031 kernel: audit: type=1334 audit(1765563850.880:297): prog-id=70 op=LOAD Dec 12 18:24:10.880000 
audit: BPF prog-id=70 op=LOAD Dec 12 18:24:10.880000 audit: BPF prog-id=47 op=UNLOAD Dec 12 18:24:10.894042 kernel: audit: type=1334 audit(1765563850.880:298): prog-id=47 op=UNLOAD Dec 12 18:24:10.880000 audit: BPF prog-id=71 op=LOAD Dec 12 18:24:10.880000 audit: BPF prog-id=72 op=LOAD Dec 12 18:24:10.880000 audit: BPF prog-id=48 op=UNLOAD Dec 12 18:24:10.880000 audit: BPF prog-id=49 op=UNLOAD Dec 12 18:24:10.884000 audit: BPF prog-id=73 op=LOAD Dec 12 18:24:10.895000 audit: BPF prog-id=58 op=UNLOAD Dec 12 18:24:10.897000 audit: BPF prog-id=74 op=LOAD Dec 12 18:24:10.897000 audit: BPF prog-id=75 op=LOAD Dec 12 18:24:10.897000 audit: BPF prog-id=56 op=UNLOAD Dec 12 18:24:10.897000 audit: BPF prog-id=57 op=UNLOAD Dec 12 18:24:10.899000 audit: BPF prog-id=76 op=LOAD Dec 12 18:24:10.899000 audit: BPF prog-id=60 op=UNLOAD Dec 12 18:24:10.899000 audit: BPF prog-id=77 op=LOAD Dec 12 18:24:10.899000 audit: BPF prog-id=78 op=LOAD Dec 12 18:24:10.899000 audit: BPF prog-id=61 op=UNLOAD Dec 12 18:24:10.899000 audit: BPF prog-id=62 op=UNLOAD Dec 12 18:24:10.901029 kernel: audit: type=1334 audit(1765563850.880:299): prog-id=71 op=LOAD Dec 12 18:24:10.900000 audit: BPF prog-id=79 op=LOAD Dec 12 18:24:10.900000 audit: BPF prog-id=59 op=UNLOAD Dec 12 18:24:10.901000 audit: BPF prog-id=80 op=LOAD Dec 12 18:24:10.901000 audit: BPF prog-id=53 op=UNLOAD Dec 12 18:24:10.901000 audit: BPF prog-id=81 op=LOAD Dec 12 18:24:10.901000 audit: BPF prog-id=82 op=LOAD Dec 12 18:24:10.901000 audit: BPF prog-id=54 op=UNLOAD Dec 12 18:24:10.902000 audit: BPF prog-id=55 op=UNLOAD Dec 12 18:24:10.903000 audit: BPF prog-id=83 op=LOAD Dec 12 18:24:10.903000 audit: BPF prog-id=63 op=UNLOAD Dec 12 18:24:10.903000 audit: BPF prog-id=84 op=LOAD Dec 12 18:24:10.903000 audit: BPF prog-id=85 op=LOAD Dec 12 18:24:10.903000 audit: BPF prog-id=64 op=UNLOAD Dec 12 18:24:10.903000 audit: BPF prog-id=65 op=UNLOAD Dec 12 18:24:10.904000 audit: BPF prog-id=86 op=LOAD Dec 12 18:24:10.904000 audit: BPF prog-id=66 
op=UNLOAD Dec 12 18:24:10.905000 audit: BPF prog-id=87 op=LOAD Dec 12 18:24:10.905000 audit: BPF prog-id=43 op=UNLOAD Dec 12 18:24:10.906000 audit: BPF prog-id=88 op=LOAD Dec 12 18:24:10.906000 audit: BPF prog-id=44 op=UNLOAD Dec 12 18:24:10.906000 audit: BPF prog-id=89 op=LOAD Dec 12 18:24:10.906000 audit: BPF prog-id=90 op=LOAD Dec 12 18:24:10.906000 audit: BPF prog-id=45 op=UNLOAD Dec 12 18:24:10.906000 audit: BPF prog-id=46 op=UNLOAD Dec 12 18:24:10.927859 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:24:10.927961 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:24:10.928736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:24:10.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 18:24:10.928792 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.3M memory peak. Dec 12 18:24:10.932403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:24:11.105976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:24:11.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:11.116272 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:24:11.155014 kubelet[2427]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:24:11.155014 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:24:11.155014 kubelet[2427]: I1212 18:24:11.154841 2427 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:24:11.427055 kubelet[2427]: I1212 18:24:11.426946 2427 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:24:11.427526 kubelet[2427]: I1212 18:24:11.427413 2427 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:24:11.429787 kubelet[2427]: I1212 18:24:11.429774 2427 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:24:11.429921 kubelet[2427]: I1212 18:24:11.429907 2427 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:24:11.430179 kubelet[2427]: I1212 18:24:11.430165 2427 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:24:11.435633 kubelet[2427]: E1212 18:24:11.435462 2427 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.207.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:24:11.435633 kubelet[2427]: I1212 18:24:11.435614 2427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:24:11.441439 kubelet[2427]: I1212 18:24:11.441416 2427 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:24:11.446840 kubelet[2427]: I1212 18:24:11.446495 2427 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 18:24:11.447621 kubelet[2427]: I1212 18:24:11.447591 2427 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:24:11.447811 kubelet[2427]: I1212 18:24:11.447672 2427 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-207-166","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:24:11.447964 kubelet[2427]: I1212 18:24:11.447951 2427 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
18:24:11.448037 kubelet[2427]: I1212 18:24:11.448028 2427 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:24:11.448167 kubelet[2427]: I1212 18:24:11.448155 2427 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:24:11.449879 kubelet[2427]: I1212 18:24:11.449864 2427 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:24:11.450154 kubelet[2427]: I1212 18:24:11.450143 2427 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:24:11.450569 kubelet[2427]: I1212 18:24:11.450554 2427 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:24:11.450636 kubelet[2427]: I1212 18:24:11.450627 2427 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:24:11.450712 kubelet[2427]: I1212 18:24:11.450702 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:24:11.453915 kubelet[2427]: E1212 18:24:11.450696 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.207.166:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-207-166&limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:24:11.454212 kubelet[2427]: I1212 18:24:11.454199 2427 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:24:11.454647 kubelet[2427]: I1212 18:24:11.454631 2427 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:24:11.454739 kubelet[2427]: I1212 18:24:11.454729 2427 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:24:11.454821 kubelet[2427]: W1212 
18:24:11.454810 2427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:24:11.455786 kubelet[2427]: E1212 18:24:11.455745 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.207.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:24:11.459592 kubelet[2427]: I1212 18:24:11.459576 2427 server.go:1262] "Started kubelet" Dec 12 18:24:11.464608 kubelet[2427]: I1212 18:24:11.464570 2427 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:24:11.466616 kubelet[2427]: I1212 18:24:11.466596 2427 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:24:11.466909 kubelet[2427]: I1212 18:24:11.466878 2427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:24:11.471701 kubelet[2427]: I1212 18:24:11.471656 2427 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:24:11.471750 kubelet[2427]: I1212 18:24:11.471712 2427 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:24:11.471921 kubelet[2427]: I1212 18:24:11.471891 2427 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:24:11.473434 kubelet[2427]: E1212 18:24:11.472196 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.207.166:6443/api/v1/namespaces/default/events\": dial tcp 172.234.207.166:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-207-166.18808af7f284110c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-207-166,UID:172-234-207-166,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-207-166,},FirstTimestamp:2025-12-12 18:24:11.45953102 +0000 UTC m=+0.338871591,LastTimestamp:2025-12-12 18:24:11.45953102 +0000 UTC m=+0.338871591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-207-166,}" Dec 12 18:24:11.473924 kubelet[2427]: I1212 18:24:11.473702 2427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:24:11.473000 audit[2443]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:11.473000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7a2671e0 a2=0 a3=0 items=0 ppid=2427 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:24:11.475596 kubelet[2427]: I1212 18:24:11.475576 2427 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:24:11.475000 audit[2444]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.475000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc72d640c0 a2=0 a3=0 items=0 ppid=2427 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:24:11.478291 kubelet[2427]: E1212 18:24:11.478259 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:11.478338 kubelet[2427]: I1212 18:24:11.478304 2427 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:24:11.478474 kubelet[2427]: I1212 18:24:11.478454 2427 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:24:11.478511 kubelet[2427]: I1212 18:24:11.478496 2427 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:24:11.477000 audit[2445]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.477000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3f774450 a2=0 a3=0 items=0 ppid=2427 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.478776 kubelet[2427]: E1212 18:24:11.478763 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.207.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:24:11.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:24:11.479038 kubelet[2427]: E1212 18:24:11.478961 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.207.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-207-166?timeout=10s\": dial tcp 172.234.207.166:6443: connect: connection refused" interval="200ms" Dec 12 18:24:11.479000 audit[2446]: NETFILTER_CFG table=mangle:45 family=10 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:11.479000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8c715f10 a2=0 a3=0 items=0 ppid=2427 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:24:11.480000 audit[2449]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:11.480000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9825d650 a2=0 a3=0 items=0 ppid=2427 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:24:11.480000 audit[2448]: NETFILTER_CFG table=filter:47 family=2 
entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.480000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffb8b427d0 a2=0 a3=0 items=0 ppid=2427 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:24:11.481000 audit[2450]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:11.481000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc2ce5090 a2=0 a3=0 items=0 ppid=2427 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:24:11.483167 kubelet[2427]: I1212 18:24:11.483097 2427 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:24:11.483167 kubelet[2427]: I1212 18:24:11.483154 2427 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:24:11.483501 kubelet[2427]: E1212 18:24:11.483478 2427 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:24:11.484234 kubelet[2427]: I1212 18:24:11.484191 2427 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:24:11.484000 audit[2452]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.484000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffd0db0fb0 a2=0 a3=0 items=0 ppid=2427 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:24:11.493000 audit[2455]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.493000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc811f2b20 a2=0 a3=0 items=0 ppid=2427 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 12 18:24:11.494970 kubelet[2427]: I1212 18:24:11.494948 2427 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 12 18:24:11.494970 kubelet[2427]: I1212 18:24:11.494967 2427 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:24:11.495613 kubelet[2427]: I1212 18:24:11.495006 2427 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:24:11.495613 kubelet[2427]: E1212 18:24:11.495266 2427 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:24:11.495000 audit[2456]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.495000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca1677000 a2=0 a3=0 items=0 ppid=2427 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:24:11.497000 audit[2458]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.497000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2de69520 a2=0 a3=0 items=0 ppid=2427 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:24:11.498000 audit[2459]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:11.498000 audit[2459]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbed3da70 a2=0 a3=0 items=0 ppid=2427 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:11.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:24:11.499839 kubelet[2427]: E1212 18:24:11.499821 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.207.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:24:11.509580 kubelet[2427]: I1212 18:24:11.509553 2427 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:24:11.510216 kubelet[2427]: I1212 18:24:11.509757 2427 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:24:11.510216 kubelet[2427]: I1212 18:24:11.509774 2427 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:24:11.511826 kubelet[2427]: I1212 18:24:11.511813 2427 policy_none.go:49] "None policy: Start" Dec 12 18:24:11.511910 kubelet[2427]: I1212 18:24:11.511899 2427 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:24:11.511971 kubelet[2427]: I1212 18:24:11.511957 2427 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:24:11.513026 kubelet[2427]: I1212 18:24:11.513015 2427 policy_none.go:47] "Start" Dec 12 18:24:11.518733 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:24:11.547169 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 12 18:24:11.551270 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:24:11.569784 kubelet[2427]: E1212 18:24:11.569757 2427 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:24:11.570282 kubelet[2427]: I1212 18:24:11.569957 2427 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:24:11.570327 kubelet[2427]: I1212 18:24:11.570283 2427 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:24:11.570703 kubelet[2427]: I1212 18:24:11.570511 2427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:24:11.572834 kubelet[2427]: E1212 18:24:11.572810 2427 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:24:11.572885 kubelet[2427]: E1212 18:24:11.572852 2427 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-207-166\" not found" Dec 12 18:24:11.606518 systemd[1]: Created slice kubepods-burstable-pod0c432ce3a4da4457142041c4dcc96530.slice - libcontainer container kubepods-burstable-pod0c432ce3a4da4457142041c4dcc96530.slice. Dec 12 18:24:11.613173 kubelet[2427]: E1212 18:24:11.613056 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:11.619409 systemd[1]: Created slice kubepods-burstable-pod63bab591eec7fa324a7bc4de01137d68.slice - libcontainer container kubepods-burstable-pod63bab591eec7fa324a7bc4de01137d68.slice. Dec 12 18:24:11.628917 systemd[1]: Created slice kubepods-burstable-pod19755d1202fb10d747c26581c9ff6e8a.slice - libcontainer container kubepods-burstable-pod19755d1202fb10d747c26581c9ff6e8a.slice. 
Dec 12 18:24:11.630086 kubelet[2427]: E1212 18:24:11.629960 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:11.632118 kubelet[2427]: E1212 18:24:11.632105 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:11.672831 kubelet[2427]: I1212 18:24:11.672798 2427 kubelet_node_status.go:75] "Attempting to register node" node="172-234-207-166" Dec 12 18:24:11.673247 kubelet[2427]: E1212 18:24:11.673221 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.207.166:6443/api/v1/nodes\": dial tcp 172.234.207.166:6443: connect: connection refused" node="172-234-207-166" Dec 12 18:24:11.679664 kubelet[2427]: E1212 18:24:11.679586 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.207.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-207-166?timeout=10s\": dial tcp 172.234.207.166:6443: connect: connection refused" interval="400ms" Dec 12 18:24:11.679709 kubelet[2427]: I1212 18:24:11.679654 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-k8s-certs\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:11.679709 kubelet[2427]: I1212 18:24:11.679689 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-kubeconfig\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " 
pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:11.679758 kubelet[2427]: I1212 18:24:11.679721 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:11.679780 kubelet[2427]: I1212 18:24:11.679753 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c432ce3a4da4457142041c4dcc96530-ca-certs\") pod \"kube-apiserver-172-234-207-166\" (UID: \"0c432ce3a4da4457142041c4dcc96530\") " pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:11.679852 kubelet[2427]: I1212 18:24:11.679778 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c432ce3a4da4457142041c4dcc96530-k8s-certs\") pod \"kube-apiserver-172-234-207-166\" (UID: \"0c432ce3a4da4457142041c4dcc96530\") " pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:11.679852 kubelet[2427]: I1212 18:24:11.679802 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-ca-certs\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:11.679852 kubelet[2427]: I1212 18:24:11.679825 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19755d1202fb10d747c26581c9ff6e8a-kubeconfig\") pod \"kube-scheduler-172-234-207-166\" (UID: 
\"19755d1202fb10d747c26581c9ff6e8a\") " pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:11.679852 kubelet[2427]: I1212 18:24:11.679847 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c432ce3a4da4457142041c4dcc96530-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-207-166\" (UID: \"0c432ce3a4da4457142041c4dcc96530\") " pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:11.679941 kubelet[2427]: I1212 18:24:11.679870 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-flexvolume-dir\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:11.875322 kubelet[2427]: I1212 18:24:11.875085 2427 kubelet_node_status.go:75] "Attempting to register node" node="172-234-207-166" Dec 12 18:24:11.875436 kubelet[2427]: E1212 18:24:11.875393 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.207.166:6443/api/v1/nodes\": dial tcp 172.234.207.166:6443: connect: connection refused" node="172-234-207-166" Dec 12 18:24:11.915575 kubelet[2427]: E1212 18:24:11.915537 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:11.916812 containerd[1630]: time="2025-12-12T18:24:11.916646888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-207-166,Uid:0c432ce3a4da4457142041c4dcc96530,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:11.933245 kubelet[2427]: E1212 18:24:11.932917 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:11.933740 containerd[1630]: time="2025-12-12T18:24:11.933599437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-207-166,Uid:63bab591eec7fa324a7bc4de01137d68,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:11.934758 kubelet[2427]: E1212 18:24:11.934743 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:11.935401 containerd[1630]: time="2025-12-12T18:24:11.935311818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-207-166,Uid:19755d1202fb10d747c26581c9ff6e8a,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:12.080670 kubelet[2427]: E1212 18:24:12.080602 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.207.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-207-166?timeout=10s\": dial tcp 172.234.207.166:6443: connect: connection refused" interval="800ms" Dec 12 18:24:12.277881 kubelet[2427]: I1212 18:24:12.277842 2427 kubelet_node_status.go:75] "Attempting to register node" node="172-234-207-166" Dec 12 18:24:12.278254 kubelet[2427]: E1212 18:24:12.278196 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.207.166:6443/api/v1/nodes\": dial tcp 172.234.207.166:6443: connect: connection refused" node="172-234-207-166" Dec 12 18:24:12.501574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765942088.mount: Deactivated successfully. 
Dec 12 18:24:12.508018 containerd[1630]: time="2025-12-12T18:24:12.507837914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:24:12.509010 containerd[1630]: time="2025-12-12T18:24:12.508969664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:24:12.514618 containerd[1630]: time="2025-12-12T18:24:12.514573537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:24:12.515672 containerd[1630]: time="2025-12-12T18:24:12.515310817Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:24:12.517002 containerd[1630]: time="2025-12-12T18:24:12.516823118Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:24:12.519917 containerd[1630]: time="2025-12-12T18:24:12.519882920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:24:12.520164 containerd[1630]: time="2025-12-12T18:24:12.520131980Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:24:12.522226 containerd[1630]: time="2025-12-12T18:24:12.522184361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:24:12.523018 
containerd[1630]: time="2025-12-12T18:24:12.522970121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 585.491712ms" Dec 12 18:24:12.524015 containerd[1630]: time="2025-12-12T18:24:12.523964682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.384824ms" Dec 12 18:24:12.524888 containerd[1630]: time="2025-12-12T18:24:12.524864082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 602.972961ms" Dec 12 18:24:12.547663 kubelet[2427]: E1212 18:24:12.543463 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.207.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:24:12.558116 containerd[1630]: time="2025-12-12T18:24:12.558060029Z" level=info msg="connecting to shim b6eb6277689039c02a879e1a86695e3d721b02f08480626560c147a456709efe" address="unix:///run/containerd/s/816d71e28ee3a49220fc17e0448fc6366f4e648ed7d3fad355f7ec4517868665" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:12.564803 containerd[1630]: 
time="2025-12-12T18:24:12.564775192Z" level=info msg="connecting to shim 4d1af3ce212bcc7373a7fb893cdabc17605df96322cef8ae94dbfd417d7f8c1b" address="unix:///run/containerd/s/2340b5bedb9e0e95135cce07d5a15705b821ec81cf237ed4b992cc75c8ee1d80" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:12.578183 containerd[1630]: time="2025-12-12T18:24:12.578146129Z" level=info msg="connecting to shim d014b91be878e96e43f1e78c1edf5cbe0bd678d7147dca16222a52d1ca89f853" address="unix:///run/containerd/s/523fdaeb41e70ba342d533a127aab96cfec6e3727d3f958c4777948889a5e593" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:12.596270 systemd[1]: Started cri-containerd-b6eb6277689039c02a879e1a86695e3d721b02f08480626560c147a456709efe.scope - libcontainer container b6eb6277689039c02a879e1a86695e3d721b02f08480626560c147a456709efe. Dec 12 18:24:12.619165 systemd[1]: Started cri-containerd-4d1af3ce212bcc7373a7fb893cdabc17605df96322cef8ae94dbfd417d7f8c1b.scope - libcontainer container 4d1af3ce212bcc7373a7fb893cdabc17605df96322cef8ae94dbfd417d7f8c1b. Dec 12 18:24:12.623702 systemd[1]: Started cri-containerd-d014b91be878e96e43f1e78c1edf5cbe0bd678d7147dca16222a52d1ca89f853.scope - libcontainer container d014b91be878e96e43f1e78c1edf5cbe0bd678d7147dca16222a52d1ca89f853. 
Dec 12 18:24:12.638000 audit: BPF prog-id=91 op=LOAD Dec 12 18:24:12.639000 audit: BPF prog-id=92 op=LOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.639000 audit: BPF prog-id=92 op=UNLOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.639000 audit: BPF prog-id=93 op=LOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.639000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.639000 audit: BPF prog-id=94 op=LOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.639000 audit: BPF prog-id=94 op=UNLOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.639000 audit: BPF prog-id=93 op=UNLOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:12.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.639000 audit: BPF prog-id=95 op=LOAD Dec 12 18:24:12.639000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236656236323737363839303339633032613837396531613836363935 Dec 12 18:24:12.649000 audit: BPF prog-id=96 op=LOAD Dec 12 18:24:12.649000 audit: BPF prog-id=97 op=LOAD Dec 12 18:24:12.649000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.649000 audit: BPF prog-id=97 op=UNLOAD Dec 12 18:24:12.649000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.650000 audit: BPF prog-id=98 op=LOAD Dec 12 18:24:12.650000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.650000 audit: BPF prog-id=99 op=LOAD Dec 12 18:24:12.650000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.651000 audit: BPF prog-id=99 op=UNLOAD Dec 12 18:24:12.651000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.651000 audit: BPF prog-id=98 op=UNLOAD Dec 12 18:24:12.651000 audit[2525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.651000 audit: BPF prog-id=100 op=LOAD Dec 12 18:24:12.652000 audit: BPF prog-id=101 op=LOAD Dec 12 18:24:12.652000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.652000 audit: BPF prog-id=101 op=UNLOAD Dec 12 18:24:12.652000 audit[2542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.652000 audit: BPF prog-id=102 op=LOAD Dec 12 18:24:12.652000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.652000 audit: BPF prog-id=103 op=LOAD Dec 12 18:24:12.652000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.652000 audit: BPF prog-id=103 op=UNLOAD Dec 12 18:24:12.652000 audit[2542]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.652000 audit: BPF prog-id=102 op=UNLOAD Dec 12 18:24:12.652000 audit[2542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.651000 audit: BPF prog-id=104 op=LOAD Dec 12 18:24:12.651000 audit[2525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2491 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464316166336365323132626363373337336137666238393363646162 Dec 12 18:24:12.652000 audit: BPF prog-id=105 op=LOAD Dec 12 
18:24:12.652000 audit[2542]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430313462393162653837386539366534336631653738633165646635 Dec 12 18:24:12.674961 kubelet[2427]: E1212 18:24:12.674923 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.207.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:24:12.709948 containerd[1630]: time="2025-12-12T18:24:12.709835375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-207-166,Uid:63bab591eec7fa324a7bc4de01137d68,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6eb6277689039c02a879e1a86695e3d721b02f08480626560c147a456709efe\"" Dec 12 18:24:12.714407 kubelet[2427]: E1212 18:24:12.713955 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:12.723202 containerd[1630]: time="2025-12-12T18:24:12.723176291Z" level=info msg="CreateContainer within sandbox \"b6eb6277689039c02a879e1a86695e3d721b02f08480626560c147a456709efe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:24:12.734164 containerd[1630]: time="2025-12-12T18:24:12.734043407Z" level=info msg="Container 
74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:12.742240 kubelet[2427]: E1212 18:24:12.742199 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.207.166:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:24:12.745419 containerd[1630]: time="2025-12-12T18:24:12.745355182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-207-166,Uid:19755d1202fb10d747c26581c9ff6e8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d014b91be878e96e43f1e78c1edf5cbe0bd678d7147dca16222a52d1ca89f853\"" Dec 12 18:24:12.745750 containerd[1630]: time="2025-12-12T18:24:12.745719432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-207-166,Uid:0c432ce3a4da4457142041c4dcc96530,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1af3ce212bcc7373a7fb893cdabc17605df96322cef8ae94dbfd417d7f8c1b\"" Dec 12 18:24:12.747510 kubelet[2427]: E1212 18:24:12.747146 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:12.747580 kubelet[2427]: E1212 18:24:12.747566 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:12.748610 containerd[1630]: time="2025-12-12T18:24:12.748579734Z" level=info msg="CreateContainer within sandbox \"b6eb6277689039c02a879e1a86695e3d721b02f08480626560c147a456709efe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf\"" Dec 12 18:24:12.749862 containerd[1630]: time="2025-12-12T18:24:12.749837664Z" level=info msg="StartContainer for \"74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf\"" Dec 12 18:24:12.751452 containerd[1630]: time="2025-12-12T18:24:12.751426245Z" level=info msg="CreateContainer within sandbox \"4d1af3ce212bcc7373a7fb893cdabc17605df96322cef8ae94dbfd417d7f8c1b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:24:12.751830 containerd[1630]: time="2025-12-12T18:24:12.751799345Z" level=info msg="connecting to shim 74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf" address="unix:///run/containerd/s/816d71e28ee3a49220fc17e0448fc6366f4e648ed7d3fad355f7ec4517868665" protocol=ttrpc version=3 Dec 12 18:24:12.754018 containerd[1630]: time="2025-12-12T18:24:12.753352616Z" level=info msg="CreateContainer within sandbox \"d014b91be878e96e43f1e78c1edf5cbe0bd678d7147dca16222a52d1ca89f853\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:24:12.759025 containerd[1630]: time="2025-12-12T18:24:12.758994779Z" level=info msg="Container c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:12.775266 systemd[1]: Started cri-containerd-74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf.scope - libcontainer container 74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf. 
Dec 12 18:24:12.780818 containerd[1630]: time="2025-12-12T18:24:12.780771510Z" level=info msg="CreateContainer within sandbox \"4d1af3ce212bcc7373a7fb893cdabc17605df96322cef8ae94dbfd417d7f8c1b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f\"" Dec 12 18:24:12.781949 containerd[1630]: time="2025-12-12T18:24:12.781747270Z" level=info msg="StartContainer for \"c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f\"" Dec 12 18:24:12.786222 containerd[1630]: time="2025-12-12T18:24:12.786139413Z" level=info msg="Container 36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:12.786334 kubelet[2427]: E1212 18:24:12.786190 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.207.166:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-207-166&limit=500&resourceVersion=0\": dial tcp 172.234.207.166:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:24:12.788859 containerd[1630]: time="2025-12-12T18:24:12.788822724Z" level=info msg="connecting to shim c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f" address="unix:///run/containerd/s/2340b5bedb9e0e95135cce07d5a15705b821ec81cf237ed4b992cc75c8ee1d80" protocol=ttrpc version=3 Dec 12 18:24:12.793435 containerd[1630]: time="2025-12-12T18:24:12.793376706Z" level=info msg="CreateContainer within sandbox \"d014b91be878e96e43f1e78c1edf5cbe0bd678d7147dca16222a52d1ca89f853\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad\"" Dec 12 18:24:12.794637 containerd[1630]: time="2025-12-12T18:24:12.794595417Z" level=info msg="StartContainer for \"36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad\"" Dec 12 
18:24:12.800384 containerd[1630]: time="2025-12-12T18:24:12.798424329Z" level=info msg="connecting to shim 36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad" address="unix:///run/containerd/s/523fdaeb41e70ba342d533a127aab96cfec6e3727d3f958c4777948889a5e593" protocol=ttrpc version=3 Dec 12 18:24:12.809000 audit: BPF prog-id=106 op=LOAD Dec 12 18:24:12.810000 audit: BPF prog-id=107 op=LOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.810000 audit: BPF prog-id=107 op=UNLOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.810000 audit: BPF prog-id=108 op=LOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.810000 audit: BPF prog-id=109 op=LOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.810000 audit: BPF prog-id=109 op=UNLOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.810000 audit: BPF prog-id=108 op=UNLOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.810000 audit: BPF prog-id=110 op=LOAD Dec 12 18:24:12.810000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2482 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734343036653731373361386630623737616639613139373531393363 Dec 12 18:24:12.832154 systemd[1]: Started cri-containerd-c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f.scope - libcontainer container c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f. Dec 12 18:24:12.844065 systemd[1]: Started cri-containerd-36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad.scope - libcontainer container 36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad. 
Dec 12 18:24:12.854000 audit: BPF prog-id=111 op=LOAD Dec 12 18:24:12.856000 audit: BPF prog-id=112 op=LOAD Dec 12 18:24:12.856000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.856000 audit: BPF prog-id=112 op=UNLOAD Dec 12 18:24:12.856000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.857000 audit: BPF prog-id=113 op=LOAD Dec 12 18:24:12.857000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.857000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.857000 audit: BPF prog-id=114 op=LOAD Dec 12 18:24:12.857000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.857000 audit: BPF prog-id=114 op=UNLOAD Dec 12 18:24:12.857000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.858000 audit: BPF prog-id=113 op=UNLOAD Dec 12 18:24:12.858000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:12.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.859000 audit: BPF prog-id=115 op=LOAD Dec 12 18:24:12.859000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2491 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353861663861633165633136353936336639346664316631373435 Dec 12 18:24:12.882105 kubelet[2427]: E1212 18:24:12.881924 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.207.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-207-166?timeout=10s\": dial tcp 172.234.207.166:6443: connect: connection refused" interval="1.6s" Dec 12 18:24:12.886000 audit: BPF prog-id=116 op=LOAD Dec 12 18:24:12.888000 audit: BPF prog-id=117 op=LOAD Dec 12 18:24:12.888000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.888000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.889000 audit: BPF prog-id=117 op=UNLOAD Dec 12 18:24:12.889000 audit[2633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.889000 audit: BPF prog-id=118 op=LOAD Dec 12 18:24:12.889000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.891000 audit: BPF prog-id=119 op=LOAD Dec 12 18:24:12.891000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:24:12.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.891000 audit: BPF prog-id=119 op=UNLOAD Dec 12 18:24:12.891000 audit[2633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.891000 audit: BPF prog-id=118 op=UNLOAD Dec 12 18:24:12.891000 audit[2633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.891000 audit: BPF prog-id=120 op=LOAD Dec 12 18:24:12.891000 audit[2633]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2514 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:12.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343031616131666631393533303631353539326130313863343030 Dec 12 18:24:12.895294 containerd[1630]: time="2025-12-12T18:24:12.895267207Z" level=info msg="StartContainer for \"74406e7173a8f0b77af9a1975193c4b363a3274c1e4e934a52c786e6c0914eaf\" returns successfully" Dec 12 18:24:12.964722 containerd[1630]: time="2025-12-12T18:24:12.964691372Z" level=info msg="StartContainer for \"36401aa1ff19530615592a018c400a145bc00f05b0df1f16c5c06c4f687106ad\" returns successfully" Dec 12 18:24:12.976170 containerd[1630]: time="2025-12-12T18:24:12.976128778Z" level=info msg="StartContainer for \"c958af8ac1ec165963f94fd1f1745f4af818844aa4b9cde7780b04636eaa636f\" returns successfully" Dec 12 18:24:13.083078 kubelet[2427]: I1212 18:24:13.081385 2427 kubelet_node_status.go:75] "Attempting to register node" node="172-234-207-166" Dec 12 18:24:13.518522 kubelet[2427]: E1212 18:24:13.518489 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:13.518848 kubelet[2427]: E1212 18:24:13.518602 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:13.519147 kubelet[2427]: E1212 18:24:13.519122 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:13.519224 kubelet[2427]: E1212 18:24:13.519204 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:13.523544 kubelet[2427]: E1212 18:24:13.523521 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:13.523676 kubelet[2427]: E1212 18:24:13.523650 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:14.526020 kubelet[2427]: E1212 18:24:14.524938 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:14.527056 kubelet[2427]: E1212 18:24:14.526862 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:14.527752 kubelet[2427]: E1212 18:24:14.527650 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:14.527848 kubelet[2427]: E1212 18:24:14.527836 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:14.565117 kubelet[2427]: E1212 18:24:14.565024 2427 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:14.704219 kubelet[2427]: I1212 18:24:14.703227 2427 kubelet_node_status.go:78] "Successfully registered node" node="172-234-207-166" Dec 12 18:24:14.704219 kubelet[2427]: E1212 18:24:14.703260 2427 kubelet_node_status.go:486] "Error updating node status, will 
retry" err="error getting node \"172-234-207-166\": node \"172-234-207-166\" not found" Dec 12 18:24:14.735000 kubelet[2427]: E1212 18:24:14.733959 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:14.834834 kubelet[2427]: E1212 18:24:14.834716 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:14.936382 kubelet[2427]: E1212 18:24:14.936326 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.037153 kubelet[2427]: E1212 18:24:15.037097 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.138958 kubelet[2427]: E1212 18:24:15.138655 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.239937 kubelet[2427]: E1212 18:24:15.239878 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.340700 kubelet[2427]: E1212 18:24:15.340636 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.441944 kubelet[2427]: E1212 18:24:15.441759 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.530691 kubelet[2427]: E1212 18:24:15.530635 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-207-166\" not found" node="172-234-207-166" Dec 12 18:24:15.531425 kubelet[2427]: E1212 18:24:15.530779 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" 
Dec 12 18:24:15.542392 kubelet[2427]: E1212 18:24:15.542342 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.642923 kubelet[2427]: E1212 18:24:15.642774 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.744220 kubelet[2427]: E1212 18:24:15.743588 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.845032 kubelet[2427]: E1212 18:24:15.844587 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.946148 kubelet[2427]: E1212 18:24:15.944745 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-207-166\" not found" Dec 12 18:24:15.979592 kubelet[2427]: I1212 18:24:15.979397 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:15.989047 kubelet[2427]: I1212 18:24:15.988334 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:15.992613 kubelet[2427]: I1212 18:24:15.992567 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:16.186351 kubelet[2427]: I1212 18:24:16.186199 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:16.194586 kubelet[2427]: E1212 18:24:16.194537 2427 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-207-166\" already exists" pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:16.194928 kubelet[2427]: E1212 18:24:16.194896 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:16.457077 kubelet[2427]: I1212 18:24:16.456799 2427 apiserver.go:52] "Watching apiserver" Dec 12 18:24:16.459849 kubelet[2427]: E1212 18:24:16.459816 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:16.479143 kubelet[2427]: I1212 18:24:16.479114 2427 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:24:16.530319 kubelet[2427]: E1212 18:24:16.530285 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:16.530930 kubelet[2427]: E1212 18:24:16.530893 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:16.580373 systemd[1]: Reload requested from client PID 2713 ('systemctl') (unit session-7.scope)... Dec 12 18:24:16.580399 systemd[1]: Reloading... Dec 12 18:24:16.757057 zram_generator::config[2775]: No configuration found. Dec 12 18:24:16.982039 systemd[1]: Reloading finished in 401 ms. Dec 12 18:24:17.026221 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:24:17.047705 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:24:17.048428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:24:17.051713 kernel: kauditd_printk_skb: 209 callbacks suppressed Dec 12 18:24:17.051766 kernel: audit: type=1131 audit(1765563857.047:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 18:24:17.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:17.048533 systemd[1]: kubelet.service: Consumed 768ms CPU time, 123.5M memory peak. Dec 12 18:24:17.054000 audit: BPF prog-id=121 op=LOAD Dec 12 18:24:17.054476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:24:17.059048 kernel: audit: type=1334 audit(1765563857.054:402): prog-id=121 op=LOAD Dec 12 18:24:17.061439 kernel: audit: type=1334 audit(1765563857.054:403): prog-id=86 op=UNLOAD Dec 12 18:24:17.054000 audit: BPF prog-id=86 op=UNLOAD Dec 12 18:24:17.063658 kernel: audit: type=1334 audit(1765563857.058:404): prog-id=122 op=LOAD Dec 12 18:24:17.058000 audit: BPF prog-id=122 op=LOAD Dec 12 18:24:17.065855 kernel: audit: type=1334 audit(1765563857.058:405): prog-id=83 op=UNLOAD Dec 12 18:24:17.058000 audit: BPF prog-id=83 op=UNLOAD Dec 12 18:24:17.067861 kernel: audit: type=1334 audit(1765563857.060:406): prog-id=123 op=LOAD Dec 12 18:24:17.060000 audit: BPF prog-id=123 op=LOAD Dec 12 18:24:17.069928 kernel: audit: type=1334 audit(1765563857.060:407): prog-id=124 op=LOAD Dec 12 18:24:17.060000 audit: BPF prog-id=124 op=LOAD Dec 12 18:24:17.060000 audit: BPF prog-id=84 op=UNLOAD Dec 12 18:24:17.060000 audit: BPF prog-id=85 op=UNLOAD Dec 12 18:24:17.074352 kernel: audit: type=1334 audit(1765563857.060:408): prog-id=84 op=UNLOAD Dec 12 18:24:17.074405 kernel: audit: type=1334 audit(1765563857.060:409): prog-id=85 op=UNLOAD Dec 12 18:24:17.074437 kernel: audit: type=1334 audit(1765563857.063:410): prog-id=125 op=LOAD Dec 12 18:24:17.063000 audit: BPF prog-id=125 op=LOAD Dec 12 18:24:17.063000 audit: BPF prog-id=70 op=UNLOAD Dec 12 18:24:17.064000 audit: BPF prog-id=126 op=LOAD Dec 12 18:24:17.064000 audit: BPF prog-id=127 op=LOAD Dec 12 18:24:17.064000 audit: BPF 
prog-id=71 op=UNLOAD Dec 12 18:24:17.064000 audit: BPF prog-id=72 op=UNLOAD Dec 12 18:24:17.068000 audit: BPF prog-id=128 op=LOAD Dec 12 18:24:17.068000 audit: BPF prog-id=76 op=UNLOAD Dec 12 18:24:17.068000 audit: BPF prog-id=129 op=LOAD Dec 12 18:24:17.068000 audit: BPF prog-id=130 op=LOAD Dec 12 18:24:17.068000 audit: BPF prog-id=77 op=UNLOAD Dec 12 18:24:17.068000 audit: BPF prog-id=78 op=UNLOAD Dec 12 18:24:17.071000 audit: BPF prog-id=131 op=LOAD Dec 12 18:24:17.071000 audit: BPF prog-id=88 op=UNLOAD Dec 12 18:24:17.071000 audit: BPF prog-id=132 op=LOAD Dec 12 18:24:17.071000 audit: BPF prog-id=133 op=LOAD Dec 12 18:24:17.071000 audit: BPF prog-id=89 op=UNLOAD Dec 12 18:24:17.071000 audit: BPF prog-id=90 op=UNLOAD Dec 12 18:24:17.079000 audit: BPF prog-id=134 op=LOAD Dec 12 18:24:17.079000 audit: BPF prog-id=87 op=UNLOAD Dec 12 18:24:17.079000 audit: BPF prog-id=135 op=LOAD Dec 12 18:24:17.079000 audit: BPF prog-id=136 op=LOAD Dec 12 18:24:17.079000 audit: BPF prog-id=74 op=UNLOAD Dec 12 18:24:17.079000 audit: BPF prog-id=75 op=UNLOAD Dec 12 18:24:17.082000 audit: BPF prog-id=137 op=LOAD Dec 12 18:24:17.082000 audit: BPF prog-id=79 op=UNLOAD Dec 12 18:24:17.083000 audit: BPF prog-id=138 op=LOAD Dec 12 18:24:17.083000 audit: BPF prog-id=80 op=UNLOAD Dec 12 18:24:17.083000 audit: BPF prog-id=139 op=LOAD Dec 12 18:24:17.083000 audit: BPF prog-id=140 op=LOAD Dec 12 18:24:17.084000 audit: BPF prog-id=81 op=UNLOAD Dec 12 18:24:17.084000 audit: BPF prog-id=82 op=UNLOAD Dec 12 18:24:17.084000 audit: BPF prog-id=141 op=LOAD Dec 12 18:24:17.084000 audit: BPF prog-id=67 op=UNLOAD Dec 12 18:24:17.085000 audit: BPF prog-id=142 op=LOAD Dec 12 18:24:17.085000 audit: BPF prog-id=143 op=LOAD Dec 12 18:24:17.085000 audit: BPF prog-id=68 op=UNLOAD Dec 12 18:24:17.085000 audit: BPF prog-id=69 op=UNLOAD Dec 12 18:24:17.086000 audit: BPF prog-id=144 op=LOAD Dec 12 18:24:17.086000 audit: BPF prog-id=73 op=UNLOAD Dec 12 18:24:17.170112 systemd[1]: systemd-hostnamed.service: 
Deactivated successfully. Dec 12 18:24:17.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:17.178000 audit: BPF prog-id=122 op=UNLOAD Dec 12 18:24:17.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:17.283137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:24:17.293491 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:24:17.336849 kubelet[2814]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:24:17.336849 kubelet[2814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:24:17.337200 kubelet[2814]: I1212 18:24:17.336897 2814 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:24:17.342804 kubelet[2814]: I1212 18:24:17.342786 2814 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:24:17.342891 kubelet[2814]: I1212 18:24:17.342881 2814 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:24:17.342971 kubelet[2814]: I1212 18:24:17.342960 2814 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:24:17.343038 kubelet[2814]: I1212 18:24:17.343027 2814 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:24:17.343227 kubelet[2814]: I1212 18:24:17.343215 2814 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:24:17.345049 kubelet[2814]: I1212 18:24:17.344205 2814 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:24:17.346160 kubelet[2814]: I1212 18:24:17.346144 2814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:24:17.350024 kubelet[2814]: I1212 18:24:17.349953 2814 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:24:17.353477 kubelet[2814]: I1212 18:24:17.353454 2814 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 18:24:17.353788 kubelet[2814]: I1212 18:24:17.353748 2814 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:24:17.353890 kubelet[2814]: I1212 18:24:17.353784 2814 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-207-166","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:24:17.353965 kubelet[2814]: I1212 18:24:17.353893 2814 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
18:24:17.353965 kubelet[2814]: I1212 18:24:17.353902 2814 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:24:17.353965 kubelet[2814]: I1212 18:24:17.353922 2814 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:24:17.355172 kubelet[2814]: I1212 18:24:17.355152 2814 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:24:17.355321 kubelet[2814]: I1212 18:24:17.355305 2814 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:24:17.355358 kubelet[2814]: I1212 18:24:17.355328 2814 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:24:17.355358 kubelet[2814]: I1212 18:24:17.355345 2814 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:24:17.355399 kubelet[2814]: I1212 18:24:17.355367 2814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:24:17.358366 kubelet[2814]: I1212 18:24:17.357870 2814 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:24:17.359481 kubelet[2814]: I1212 18:24:17.358884 2814 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:24:17.359650 kubelet[2814]: I1212 18:24:17.359630 2814 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:24:17.364230 kubelet[2814]: I1212 18:24:17.364208 2814 server.go:1262] "Started kubelet" Dec 12 18:24:17.365289 kubelet[2814]: I1212 18:24:17.365271 2814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:24:17.371104 kubelet[2814]: I1212 18:24:17.370389 2814 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:24:17.373152 kubelet[2814]: I1212 18:24:17.372732 2814 server.go:310] "Adding debug handlers to 
kubelet server" Dec 12 18:24:17.376360 kubelet[2814]: I1212 18:24:17.376335 2814 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:24:17.377107 kubelet[2814]: I1212 18:24:17.377085 2814 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:24:17.377311 kubelet[2814]: I1212 18:24:17.377258 2814 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:24:17.378002 kubelet[2814]: I1212 18:24:17.377892 2814 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:24:17.378002 kubelet[2814]: I1212 18:24:17.377928 2814 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:24:17.378179 kubelet[2814]: I1212 18:24:17.378166 2814 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:24:17.378407 kubelet[2814]: I1212 18:24:17.378391 2814 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:24:17.389186 kubelet[2814]: I1212 18:24:17.389157 2814 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 18:24:17.390539 kubelet[2814]: I1212 18:24:17.390525 2814 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:24:17.390819 kubelet[2814]: I1212 18:24:17.390596 2814 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:24:17.390819 kubelet[2814]: I1212 18:24:17.390616 2814 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:24:17.390819 kubelet[2814]: E1212 18:24:17.390653 2814 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:24:17.394535 kubelet[2814]: I1212 18:24:17.394509 2814 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:24:17.394535 kubelet[2814]: I1212 18:24:17.394528 2814 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:24:17.394607 kubelet[2814]: I1212 18:24:17.394593 2814 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:24:17.394770 kubelet[2814]: E1212 18:24:17.394755 2814 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:24:17.439766 kubelet[2814]: I1212 18:24:17.439742 2814 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:24:17.439766 kubelet[2814]: I1212 18:24:17.439757 2814 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:24:17.439766 kubelet[2814]: I1212 18:24:17.439773 2814 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:24:17.439929 kubelet[2814]: I1212 18:24:17.439873 2814 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:24:17.439929 kubelet[2814]: I1212 18:24:17.439882 2814 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:24:17.439929 kubelet[2814]: I1212 18:24:17.439896 2814 policy_none.go:49] "None policy: Start" Dec 12 18:24:17.439929 kubelet[2814]: I1212 18:24:17.439904 2814 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:24:17.439929 kubelet[2814]: I1212 18:24:17.439913 2814 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:24:17.440074 kubelet[2814]: I1212 18:24:17.440011 2814 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 18:24:17.440074 kubelet[2814]: I1212 18:24:17.440019 2814 policy_none.go:47] "Start" Dec 12 18:24:17.445374 kubelet[2814]: E1212 18:24:17.445346 2814 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:24:17.445517 kubelet[2814]: I1212 18:24:17.445487 2814 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:24:17.445517 kubelet[2814]: I1212 18:24:17.445504 2814 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:24:17.445817 kubelet[2814]: I1212 18:24:17.445798 2814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:24:17.450760 kubelet[2814]: 
E1212 18:24:17.449836 2814 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:24:17.492037 kubelet[2814]: I1212 18:24:17.491679 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:17.492037 kubelet[2814]: I1212 18:24:17.491706 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.492037 kubelet[2814]: I1212 18:24:17.491894 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:17.497036 kubelet[2814]: E1212 18:24:17.496927 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-207-166\" already exists" pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:17.497282 kubelet[2814]: E1212 18:24:17.497259 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-207-166\" already exists" pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.497323 kubelet[2814]: E1212 18:24:17.497221 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-207-166\" already exists" pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:17.551550 kubelet[2814]: I1212 18:24:17.551474 2814 kubelet_node_status.go:75] "Attempting to register node" node="172-234-207-166" Dec 12 18:24:17.562416 kubelet[2814]: I1212 18:24:17.562282 2814 kubelet_node_status.go:124] "Node was previously registered" node="172-234-207-166" Dec 12 18:24:17.563678 kubelet[2814]: I1212 18:24:17.563566 2814 kubelet_node_status.go:78] "Successfully registered node" node="172-234-207-166" Dec 12 18:24:17.578498 kubelet[2814]: I1212 18:24:17.578443 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c432ce3a4da4457142041c4dcc96530-ca-certs\") pod \"kube-apiserver-172-234-207-166\" (UID: \"0c432ce3a4da4457142041c4dcc96530\") " pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:17.578647 kubelet[2814]: I1212 18:24:17.578537 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c432ce3a4da4457142041c4dcc96530-k8s-certs\") pod \"kube-apiserver-172-234-207-166\" (UID: \"0c432ce3a4da4457142041c4dcc96530\") " pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:17.578647 kubelet[2814]: I1212 18:24:17.578570 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c432ce3a4da4457142041c4dcc96530-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-207-166\" (UID: \"0c432ce3a4da4457142041c4dcc96530\") " pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:17.578647 kubelet[2814]: I1212 18:24:17.578630 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-flexvolume-dir\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.578765 kubelet[2814]: I1212 18:24:17.578649 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-kubeconfig\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.578765 kubelet[2814]: I1212 18:24:17.578714 2814 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.578813 kubelet[2814]: I1212 18:24:17.578731 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-ca-certs\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.578837 kubelet[2814]: I1212 18:24:17.578813 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63bab591eec7fa324a7bc4de01137d68-k8s-certs\") pod \"kube-controller-manager-172-234-207-166\" (UID: \"63bab591eec7fa324a7bc4de01137d68\") " pod="kube-system/kube-controller-manager-172-234-207-166" Dec 12 18:24:17.579691 kubelet[2814]: I1212 18:24:17.578836 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19755d1202fb10d747c26581c9ff6e8a-kubeconfig\") pod \"kube-scheduler-172-234-207-166\" (UID: \"19755d1202fb10d747c26581c9ff6e8a\") " pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:17.798775 kubelet[2814]: E1212 18:24:17.797715 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:17.798775 kubelet[2814]: E1212 18:24:17.797795 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:17.798775 kubelet[2814]: E1212 18:24:17.797828 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:18.363651 kubelet[2814]: I1212 18:24:18.363398 2814 apiserver.go:52] "Watching apiserver" Dec 12 18:24:18.378088 kubelet[2814]: I1212 18:24:18.378041 2814 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:24:18.423078 kubelet[2814]: I1212 18:24:18.423041 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:18.425489 kubelet[2814]: I1212 18:24:18.425465 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:18.426031 kubelet[2814]: E1212 18:24:18.426011 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:18.432955 kubelet[2814]: E1212 18:24:18.432713 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-207-166\" already exists" pod="kube-system/kube-scheduler-172-234-207-166" Dec 12 18:24:18.433164 kubelet[2814]: E1212 18:24:18.433148 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:18.433756 kubelet[2814]: E1212 18:24:18.433676 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-207-166\" already exists" pod="kube-system/kube-apiserver-172-234-207-166" Dec 12 18:24:18.433969 kubelet[2814]: E1212 18:24:18.433912 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:18.441567 kubelet[2814]: I1212 18:24:18.441523 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-207-166" podStartSLOduration=3.441511858 podStartE2EDuration="3.441511858s" podCreationTimestamp="2025-12-12 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:24:18.435446925 +0000 UTC m=+1.137683419" watchObservedRunningTime="2025-12-12 18:24:18.441511858 +0000 UTC m=+1.143748352" Dec 12 18:24:18.441903 kubelet[2814]: I1212 18:24:18.441872 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-207-166" podStartSLOduration=3.441863049 podStartE2EDuration="3.441863049s" podCreationTimestamp="2025-12-12 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:24:18.441830689 +0000 UTC m=+1.144067213" watchObservedRunningTime="2025-12-12 18:24:18.441863049 +0000 UTC m=+1.144099543" Dec 12 18:24:18.453652 kubelet[2814]: I1212 18:24:18.453428 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-207-166" podStartSLOduration=3.453417684 podStartE2EDuration="3.453417684s" podCreationTimestamp="2025-12-12 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:24:18.447826462 +0000 UTC m=+1.150062956" watchObservedRunningTime="2025-12-12 18:24:18.453417684 +0000 UTC m=+1.155654178" Dec 12 18:24:19.428553 kubelet[2814]: E1212 18:24:19.425849 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:19.428553 kubelet[2814]: E1212 18:24:19.427107 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:20.428542 kubelet[2814]: E1212 18:24:20.427721 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:23.815388 kubelet[2814]: I1212 18:24:23.815344 2814 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:24:23.816298 kubelet[2814]: I1212 18:24:23.815773 2814 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:24:23.816526 containerd[1630]: time="2025-12-12T18:24:23.815618079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:24:24.683474 systemd[1]: Created slice kubepods-besteffort-pod169a1bc0_80e8_4a92_bb5d_a57619eaa5cc.slice - libcontainer container kubepods-besteffort-pod169a1bc0_80e8_4a92_bb5d_a57619eaa5cc.slice. 
Dec 12 18:24:24.725200 kubelet[2814]: I1212 18:24:24.725135 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/169a1bc0-80e8-4a92-bb5d-a57619eaa5cc-kube-proxy\") pod \"kube-proxy-dcr5h\" (UID: \"169a1bc0-80e8-4a92-bb5d-a57619eaa5cc\") " pod="kube-system/kube-proxy-dcr5h" Dec 12 18:24:24.725200 kubelet[2814]: I1212 18:24:24.725172 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169a1bc0-80e8-4a92-bb5d-a57619eaa5cc-lib-modules\") pod \"kube-proxy-dcr5h\" (UID: \"169a1bc0-80e8-4a92-bb5d-a57619eaa5cc\") " pod="kube-system/kube-proxy-dcr5h" Dec 12 18:24:24.725382 kubelet[2814]: I1212 18:24:24.725222 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169a1bc0-80e8-4a92-bb5d-a57619eaa5cc-xtables-lock\") pod \"kube-proxy-dcr5h\" (UID: \"169a1bc0-80e8-4a92-bb5d-a57619eaa5cc\") " pod="kube-system/kube-proxy-dcr5h" Dec 12 18:24:24.725382 kubelet[2814]: I1212 18:24:24.725251 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7qrq\" (UniqueName: \"kubernetes.io/projected/169a1bc0-80e8-4a92-bb5d-a57619eaa5cc-kube-api-access-k7qrq\") pod \"kube-proxy-dcr5h\" (UID: \"169a1bc0-80e8-4a92-bb5d-a57619eaa5cc\") " pod="kube-system/kube-proxy-dcr5h" Dec 12 18:24:24.929081 kubelet[2814]: E1212 18:24:24.928294 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:24.996568 kubelet[2814]: E1212 18:24:24.995539 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:25.006383 containerd[1630]: time="2025-12-12T18:24:25.006322489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dcr5h,Uid:169a1bc0-80e8-4a92-bb5d-a57619eaa5cc,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:25.018752 systemd[1]: Created slice kubepods-besteffort-pod3d83a3d1_8cc2_4865_81c7_392f1803ab3b.slice - libcontainer container kubepods-besteffort-pod3d83a3d1_8cc2_4865_81c7_392f1803ab3b.slice. Dec 12 18:24:25.029147 kubelet[2814]: I1212 18:24:25.027552 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g79c\" (UniqueName: \"kubernetes.io/projected/3d83a3d1-8cc2-4865-81c7-392f1803ab3b-kube-api-access-7g79c\") pod \"tigera-operator-65cdcdfd6d-pfml9\" (UID: \"3d83a3d1-8cc2-4865-81c7-392f1803ab3b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-pfml9" Dec 12 18:24:25.029147 kubelet[2814]: I1212 18:24:25.027590 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d83a3d1-8cc2-4865-81c7-392f1803ab3b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-pfml9\" (UID: \"3d83a3d1-8cc2-4865-81c7-392f1803ab3b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-pfml9" Dec 12 18:24:25.035135 containerd[1630]: time="2025-12-12T18:24:25.035089187Z" level=info msg="connecting to shim 25eb6e7b333cf4ecea4d434734cc0e1f91737f5a3b07ce9ec51c523f32098876" address="unix:///run/containerd/s/1ead79db9a3471d913efd26fb5475be593d7fa11a7d3a58265176d80cf928345" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:25.066153 systemd[1]: Started cri-containerd-25eb6e7b333cf4ecea4d434734cc0e1f91737f5a3b07ce9ec51c523f32098876.scope - libcontainer container 25eb6e7b333cf4ecea4d434734cc0e1f91737f5a3b07ce9ec51c523f32098876. 
Dec 12 18:24:25.082068 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 12 18:24:25.082143 kernel: audit: type=1334 audit(1765563865.079:453): prog-id=145 op=LOAD Dec 12 18:24:25.079000 audit: BPF prog-id=145 op=LOAD Dec 12 18:24:25.079000 audit: BPF prog-id=146 op=LOAD Dec 12 18:24:25.086430 kernel: audit: type=1334 audit(1765563865.079:454): prog-id=146 op=LOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.088574 kernel: audit: type=1300 audit(1765563865.079:454): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.096636 kernel: audit: type=1327 audit(1765563865.079:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.079000 audit: BPF prog-id=146 op=UNLOAD Dec 12 18:24:25.105616 kernel: audit: type=1334 audit(1765563865.079:455): prog-id=146 op=UNLOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.107531 kernel: audit: type=1300 audit(1765563865.079:455): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.114890 kernel: audit: type=1327 audit(1765563865.079:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.121175 kernel: audit: type=1334 audit(1765563865.079:456): prog-id=147 op=LOAD Dec 12 18:24:25.079000 audit: BPF prog-id=147 op=LOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.131504 kubelet[2814]: E1212 18:24:25.127166 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:25.131572 containerd[1630]: time="2025-12-12T18:24:25.125921226Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-dcr5h,Uid:169a1bc0-80e8-4a92-bb5d-a57619eaa5cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"25eb6e7b333cf4ecea4d434734cc0e1f91737f5a3b07ce9ec51c523f32098876\"" Dec 12 18:24:25.139787 kernel: audit: type=1300 audit(1765563865.079:456): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.139877 kernel: audit: type=1327 audit(1765563865.079:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.079000 audit: BPF prog-id=148 op=LOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.079000 audit: BPF prog-id=148 op=UNLOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.079000 audit: BPF prog-id=147 op=UNLOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.079000 audit: BPF prog-id=149 op=LOAD Dec 12 18:24:25.079000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656236653762333333636634656365613464343334373334636330 Dec 12 18:24:25.141339 containerd[1630]: 
time="2025-12-12T18:24:25.141310078Z" level=info msg="CreateContainer within sandbox \"25eb6e7b333cf4ecea4d434734cc0e1f91737f5a3b07ce9ec51c523f32098876\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:24:25.157828 containerd[1630]: time="2025-12-12T18:24:25.157789295Z" level=info msg="Container c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:25.165393 containerd[1630]: time="2025-12-12T18:24:25.165355661Z" level=info msg="CreateContainer within sandbox \"25eb6e7b333cf4ecea4d434734cc0e1f91737f5a3b07ce9ec51c523f32098876\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e\"" Dec 12 18:24:25.166817 containerd[1630]: time="2025-12-12T18:24:25.166787017Z" level=info msg="StartContainer for \"c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e\"" Dec 12 18:24:25.169911 containerd[1630]: time="2025-12-12T18:24:25.169875197Z" level=info msg="connecting to shim c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e" address="unix:///run/containerd/s/1ead79db9a3471d913efd26fb5475be593d7fa11a7d3a58265176d80cf928345" protocol=ttrpc version=3 Dec 12 18:24:25.197156 systemd[1]: Started cri-containerd-c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e.scope - libcontainer container c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e. 
Dec 12 18:24:25.257000 audit: BPF prog-id=150 op=LOAD Dec 12 18:24:25.257000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613065396233653664616162633232663935373335636231653736 Dec 12 18:24:25.257000 audit: BPF prog-id=151 op=LOAD Dec 12 18:24:25.257000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613065396233653664616162633232663935373335636231653736 Dec 12 18:24:25.257000 audit: BPF prog-id=151 op=UNLOAD Dec 12 18:24:25.257000 audit[2909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.257000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613065396233653664616162633232663935373335636231653736 Dec 12 18:24:25.257000 audit: BPF prog-id=150 op=UNLOAD Dec 12 18:24:25.257000 audit[2909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613065396233653664616162633232663935373335636231653736 Dec 12 18:24:25.257000 audit: BPF prog-id=152 op=LOAD Dec 12 18:24:25.257000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2869 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613065396233653664616162633232663935373335636231653736 Dec 12 18:24:25.281816 containerd[1630]: time="2025-12-12T18:24:25.281743329Z" level=info msg="StartContainer for \"c3a0e9b3e6daabc22f95735cb1e763ff6d11a897169ecaf4685bd5af64d4f87e\" returns successfully" Dec 12 18:24:25.323679 containerd[1630]: time="2025-12-12T18:24:25.323615386Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-pfml9,Uid:3d83a3d1-8cc2-4865-81c7-392f1803ab3b,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:24:25.353768 containerd[1630]: time="2025-12-12T18:24:25.353135062Z" level=info msg="connecting to shim 57e17ba7cb71ad67cd7312bff755b80b64b8f33b8a4f4810b3c370927247aada" address="unix:///run/containerd/s/e793daeaca25ad7af5e70cd7bdde9d573dc8a58bd5e29dd87752acc473e52528" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:25.392294 systemd[1]: Started cri-containerd-57e17ba7cb71ad67cd7312bff755b80b64b8f33b8a4f4810b3c370927247aada.scope - libcontainer container 57e17ba7cb71ad67cd7312bff755b80b64b8f33b8a4f4810b3c370927247aada. Dec 12 18:24:25.406000 audit: BPF prog-id=153 op=LOAD Dec 12 18:24:25.406000 audit: BPF prog-id=154 op=LOAD Dec 12 18:24:25.406000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.406000 audit: BPF prog-id=154 op=UNLOAD Dec 12 18:24:25.406000 audit[2969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.406000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.407000 audit: BPF prog-id=155 op=LOAD Dec 12 18:24:25.407000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.407000 audit: BPF prog-id=156 op=LOAD Dec 12 18:24:25.407000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.407000 audit: BPF prog-id=156 op=UNLOAD Dec 12 18:24:25.407000 audit[2969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:24:25.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.407000 audit: BPF prog-id=155 op=UNLOAD Dec 12 18:24:25.407000 audit[2969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.407000 audit: BPF prog-id=157 op=LOAD Dec 12 18:24:25.407000 audit[2969]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2956 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537653137626137636237316164363763643733313262666637353562 Dec 12 18:24:25.441355 kubelet[2814]: E1212 18:24:25.440831 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:25.441687 kubelet[2814]: E1212 18:24:25.441641 2814 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:25.447776 containerd[1630]: time="2025-12-12T18:24:25.447736570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-pfml9,Uid:3d83a3d1-8cc2-4865-81c7-392f1803ab3b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"57e17ba7cb71ad67cd7312bff755b80b64b8f33b8a4f4810b3c370927247aada\"" Dec 12 18:24:25.449528 containerd[1630]: time="2025-12-12T18:24:25.449353395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:24:25.504000 audit[3024]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.504000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffab3c3100 a2=0 a3=7fffab3c30ec items=0 ppid=2925 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:24:25.505000 audit[3023]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.505000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe460aa410 a2=0 a3=7ffe460aa3fc items=0 ppid=2925 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 
18:24:25.506000 audit[3025]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.506000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffafd641f0 a2=0 a3=7fffafd641dc items=0 ppid=2925 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:24:25.507000 audit[3027]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.507000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc839241b0 a2=0 a3=7ffc8392419c items=0 ppid=2925 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:24:25.508000 audit[3028]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.509000 audit[3029]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.509000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd56b45a40 a2=0 a3=7ffd56b45a2c items=0 ppid=2925 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:25.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:24:25.508000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd461472c0 a2=0 a3=7ffd461472ac items=0 ppid=2925 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:24:25.613000 audit[3032]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.613000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffc76906c0 a2=0 a3=7fffc76906ac items=0 ppid=2925 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:24:25.617000 audit[3034]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.617000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff95197790 a2=0 a3=7fff9519777c items=0 ppid=2925 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.617000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 12 18:24:25.622000 audit[3037]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.622000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea0fb9e40 a2=0 a3=7ffea0fb9e2c items=0 ppid=2925 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 18:24:25.623000 audit[3038]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.623000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefb2c5f20 a2=0 a3=7ffefb2c5f0c items=0 ppid=2925 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:24:25.627000 audit[3040]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.627000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=528 a0=3 a1=7fff9d798930 a2=0 a3=7fff9d79891c items=0 ppid=2925 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:24:25.628000 audit[3041]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.628000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd62558480 a2=0 a3=7ffd6255846c items=0 ppid=2925 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:24:25.632000 audit[3043]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.632000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce5a052e0 a2=0 a3=7ffce5a052cc items=0 ppid=2925 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.632000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.637000 audit[3046]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.637000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe9cf71060 a2=0 a3=7ffe9cf7104c items=0 ppid=2925 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.639000 audit[3047]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.639000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb7762020 a2=0 a3=7ffeb776200c items=0 ppid=2925 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:24:25.643000 audit[3049]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.643000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 
a1=7fff5ebd2e40 a2=0 a3=7fff5ebd2e2c items=0 ppid=2925 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:24:25.644000 audit[3050]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.644000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9e5f5380 a2=0 a3=7ffe9e5f536c items=0 ppid=2925 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.644000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:24:25.648000 audit[3052]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.648000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe04238710 a2=0 a3=7ffe042386fc items=0 ppid=2925 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 
12 18:24:25.653000 audit[3055]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.653000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd208cd820 a2=0 a3=7ffd208cd80c items=0 ppid=2925 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 18:24:25.658000 audit[3058]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.658000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc7e103b0 a2=0 a3=7fffc7e1039c items=0 ppid=2925 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 18:24:25.659000 audit[3059]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.659000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb2a935a0 a2=0 a3=7ffdb2a9358c items=0 ppid=2925 pid=3059 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:24:25.663000 audit[3061]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.663000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe279932e0 a2=0 a3=7ffe279932cc items=0 ppid=2925 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.667000 audit[3064]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.667000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffcc813390 a2=0 a3=7fffcc81337c items=0 ppid=2925 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.668000 audit[3065]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.668000 
audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd37a12600 a2=0 a3=7ffd37a125ec items=0 ppid=2925 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:24:25.672000 audit[3067]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:24:25.672000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffdd3e4da0 a2=0 a3=7fffdd3e4d8c items=0 ppid=2925 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:24:25.699000 audit[3073]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:25.699000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe2fea1b10 a2=0 a3=7ffe2fea1afc items=0 ppid=2925 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:25.711000 audit[3073]: NETFILTER_CFG table=nat:80 family=2 entries=14 
op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:25.711000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe2fea1b10 a2=0 a3=7ffe2fea1afc items=0 ppid=2925 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:25.713000 audit[3078]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.713000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffec5a00bf0 a2=0 a3=7ffec5a00bdc items=0 ppid=2925 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:24:25.717000 audit[3080]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.717000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff85e76a20 a2=0 a3=7fff85e76a0c items=0 ppid=2925 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.717000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 18:24:25.722000 audit[3083]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.722000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff5083fea0 a2=0 a3=7fff5083fe8c items=0 ppid=2925 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 12 18:24:25.723000 audit[3084]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.723000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc806be340 a2=0 a3=7ffc806be32c items=0 ppid=2925 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.723000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:24:25.727000 audit[3086]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.727000 audit[3086]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffe306c1830 a2=0 a3=7ffe306c181c items=0 ppid=2925 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:24:25.728000 audit[3087]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.728000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff687aadf0 a2=0 a3=7fff687aaddc items=0 ppid=2925 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.728000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:24:25.732000 audit[3089]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.732000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc204e3d60 a2=0 a3=7ffc204e3d4c items=0 ppid=2925 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.732000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.737000 audit[3092]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.737000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd83fcbb00 a2=0 a3=7ffd83fcbaec items=0 ppid=2925 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.737000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.738000 audit[3093]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.738000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6e98d1c0 a2=0 a3=7ffc6e98d1ac items=0 ppid=2925 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:24:25.742000 audit[3095]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.742000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=528 a0=3 a1=7ffe57ffe120 a2=0 a3=7ffe57ffe10c items=0 ppid=2925 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:24:25.743000 audit[3096]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.743000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff82aed9f0 a2=0 a3=7fff82aed9dc items=0 ppid=2925 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:24:25.747000 audit[3098]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.747000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffded706710 a2=0 a3=7ffded7066fc items=0 ppid=2925 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.747000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 18:24:25.752000 audit[3101]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.752000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd87f93910 a2=0 a3=7ffd87f938fc items=0 ppid=2925 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 18:24:25.758000 audit[3104]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.758000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdb5d45e0 a2=0 a3=7ffcdb5d45cc items=0 ppid=2925 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 12 18:24:25.761000 audit[3105]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.761000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffffaf26400 a2=0 a3=7ffffaf263ec items=0 ppid=2925 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:24:25.765000 audit[3107]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.765000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdefac9b90 a2=0 a3=7ffdefac9b7c items=0 ppid=2925 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.770000 audit[3110]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.770000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcef6dbb50 a2=0 a3=7ffcef6dbb3c items=0 ppid=2925 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.770000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:24:25.771000 audit[3111]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.771000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe663a2ac0 a2=0 a3=7ffe663a2aac items=0 ppid=2925 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:24:25.775000 audit[3113]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.775000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe790ede10 a2=0 a3=7ffe790eddfc items=0 ppid=2925 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:24:25.777000 audit[3114]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.777000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcbadf8f0 a2=0 a3=7ffdcbadf8dc items=0 ppid=2925 pid=3114 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:24:25.780000 audit[3116]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.780000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff52357c20 a2=0 a3=7fff52357c0c items=0 ppid=2925 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:24:25.787000 audit[3119]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:24:25.787000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe67181510 a2=0 a3=7ffe671814fc items=0 ppid=2925 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:24:25.793000 audit[3121]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:24:25.793000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc85668cc0 a2=0 a3=7ffc85668cac items=0 ppid=2925 pid=3121 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.793000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:25.794000 audit[3121]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:24:25.794000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc85668cc0 a2=0 a3=7ffc85668cac items=0 ppid=2925 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:25.794000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:26.245275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441213122.mount: Deactivated successfully. 
Dec 12 18:24:26.767858 containerd[1630]: time="2025-12-12T18:24:26.767813765Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:26.769056 containerd[1630]: time="2025-12-12T18:24:26.769030330Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 18:24:26.770011 containerd[1630]: time="2025-12-12T18:24:26.769678089Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:26.771220 containerd[1630]: time="2025-12-12T18:24:26.771180794Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:26.772085 containerd[1630]: time="2025-12-12T18:24:26.771740823Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.322361998s" Dec 12 18:24:26.772085 containerd[1630]: time="2025-12-12T18:24:26.771767923Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:24:26.775566 containerd[1630]: time="2025-12-12T18:24:26.775542632Z" level=info msg="CreateContainer within sandbox \"57e17ba7cb71ad67cd7312bff755b80b64b8f33b8a4f4810b3c370927247aada\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:24:26.782011 containerd[1630]: time="2025-12-12T18:24:26.781653654Z" level=info msg="Container 
ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:26.790164 containerd[1630]: time="2025-12-12T18:24:26.790134118Z" level=info msg="CreateContainer within sandbox \"57e17ba7cb71ad67cd7312bff755b80b64b8f33b8a4f4810b3c370927247aada\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8\"" Dec 12 18:24:26.790998 containerd[1630]: time="2025-12-12T18:24:26.790902526Z" level=info msg="StartContainer for \"ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8\"" Dec 12 18:24:26.792728 containerd[1630]: time="2025-12-12T18:24:26.792677271Z" level=info msg="connecting to shim ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8" address="unix:///run/containerd/s/e793daeaca25ad7af5e70cd7bdde9d573dc8a58bd5e29dd87752acc473e52528" protocol=ttrpc version=3 Dec 12 18:24:26.815161 systemd[1]: Started cri-containerd-ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8.scope - libcontainer container ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8. 
Dec 12 18:24:26.826000 audit: BPF prog-id=158 op=LOAD Dec 12 18:24:26.826000 audit: BPF prog-id=159 op=LOAD Dec 12 18:24:26.826000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:26.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.826000 audit: BPF prog-id=159 op=UNLOAD Dec 12 18:24:26.826000 audit[3130]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:26.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.826000 audit: BPF prog-id=160 op=LOAD Dec 12 18:24:26.826000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:26.826000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.826000 audit: BPF prog-id=161 op=LOAD Dec 12 18:24:26.826000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:26.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.826000 audit: BPF prog-id=161 op=UNLOAD Dec 12 18:24:26.826000 audit[3130]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:26.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.826000 audit: BPF prog-id=160 op=UNLOAD Dec 12 18:24:26.826000 audit[3130]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:26.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.827000 audit: BPF prog-id=162 op=LOAD Dec 12 18:24:26.827000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2956 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:26.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383032306265313136323266393731386632323030376636636335 Dec 12 18:24:26.852037 containerd[1630]: time="2025-12-12T18:24:26.851959305Z" level=info msg="StartContainer for \"ba8020be11622f9718f22007f6cc5bb1a2cd6a4813db07ce19ae9b4ea3557fb8\" returns successfully" Dec 12 18:24:27.346709 kubelet[2814]: E1212 18:24:27.344936 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:27.357092 kubelet[2814]: I1212 18:24:27.357021 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dcr5h" podStartSLOduration=3.357006107 podStartE2EDuration="3.357006107s" podCreationTimestamp="2025-12-12 18:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:24:25.461334097 +0000 UTC m=+8.163570591" watchObservedRunningTime="2025-12-12 18:24:27.357006107 +0000 UTC m=+10.059242601" Dec 12 
18:24:27.447744 kubelet[2814]: E1212 18:24:27.447716 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:27.468798 kubelet[2814]: I1212 18:24:27.468710 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-pfml9" podStartSLOduration=2.145081998 podStartE2EDuration="3.468683981s" podCreationTimestamp="2025-12-12 18:24:24 +0000 UTC" firstStartedPulling="2025-12-12 18:24:25.448882427 +0000 UTC m=+8.151118921" lastFinishedPulling="2025-12-12 18:24:26.77248441 +0000 UTC m=+9.474720904" observedRunningTime="2025-12-12 18:24:27.468113743 +0000 UTC m=+10.170350237" watchObservedRunningTime="2025-12-12 18:24:27.468683981 +0000 UTC m=+10.170920475" Dec 12 18:24:29.298726 kubelet[2814]: E1212 18:24:29.298141 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:31.434092 update_engine[1595]: I20251212 18:24:31.434031 1595 update_attempter.cc:509] Updating boot flags... Dec 12 18:24:32.502167 sudo[1869]: pam_unix(sudo:session): session closed for user root Dec 12 18:24:32.501000 audit[1869]: USER_END pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:24:32.505734 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 18:24:32.505802 kernel: audit: type=1106 audit(1765563872.501:533): pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:24:32.501000 audit[1869]: CRED_DISP pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:24:32.520005 kernel: audit: type=1104 audit(1765563872.501:534): pid=1869 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:24:32.563907 sshd[1868]: Connection closed by 139.178.89.65 port 37588 Dec 12 18:24:32.564809 sshd-session[1865]: pam_unix(sshd:session): session closed for user core Dec 12 18:24:32.565000 audit[1865]: USER_END pid=1865 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:24:32.575999 kernel: audit: type=1106 audit(1765563872.565:535): pid=1865 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:24:32.579419 systemd[1]: sshd@6-172.234.207.166:22-139.178.89.65:37588.service: Deactivated successfully. Dec 12 18:24:32.583594 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:24:32.583955 systemd[1]: session-7.scope: Consumed 5.663s CPU time, 229.4M memory peak. 
Dec 12 18:24:32.565000 audit[1865]: CRED_DISP pid=1865 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:24:32.594005 kernel: audit: type=1104 audit(1765563872.565:536): pid=1865 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:24:32.598616 systemd-logind[1594]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:24:32.599913 systemd-logind[1594]: Removed session 7. Dec 12 18:24:32.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.207.166:22-139.178.89.65:37588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:24:32.609039 kernel: audit: type=1131 audit(1765563872.578:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.207.166:22-139.178.89.65:37588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:24:33.283000 audit[3231]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:33.290002 kernel: audit: type=1325 audit(1765563873.283:538): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:33.283000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc6296a8e0 a2=0 a3=7ffc6296a8cc items=0 ppid=2925 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:33.301005 kernel: audit: type=1300 audit(1765563873.283:538): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc6296a8e0 a2=0 a3=7ffc6296a8cc items=0 ppid=2925 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:33.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:33.308024 kernel: audit: type=1327 audit(1765563873.283:538): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:33.301000 audit[3231]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:33.301000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6296a8e0 a2=0 a3=0 items=0 ppid=2925 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:33.314249 kernel: 
audit: type=1325 audit(1765563873.301:539): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:33.314297 kernel: audit: type=1300 audit(1765563873.301:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6296a8e0 a2=0 a3=0 items=0 ppid=2925 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:33.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:34.342000 audit[3233]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:34.342000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe5ac397a0 a2=0 a3=7ffe5ac3978c items=0 ppid=2925 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:34.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:34.346000 audit[3233]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:34.346000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5ac397a0 a2=0 a3=0 items=0 ppid=2925 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:34.346000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:35.419000 audit[3235]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:35.419000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc1b5a7d30 a2=0 a3=7ffc1b5a7d1c items=0 ppid=2925 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:35.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:35.423000 audit[3235]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:35.423000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1b5a7d30 a2=0 a3=0 items=0 ppid=2925 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:35.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:36.780000 audit[3237]: NETFILTER_CFG table=filter:111 family=2 entries=21 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:36.780000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd36d70db0 a2=0 a3=7ffd36d70d9c items=0 ppid=2925 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:36.780000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:36.808500 kubelet[2814]: I1212 18:24:36.808369 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f74727-3fb9-44f2-bd8f-54258af1368d-tigera-ca-bundle\") pod \"calico-typha-57d48f6cb5-hk2qv\" (UID: \"20f74727-3fb9-44f2-bd8f-54258af1368d\") " pod="calico-system/calico-typha-57d48f6cb5-hk2qv" Dec 12 18:24:36.808500 kubelet[2814]: I1212 18:24:36.808412 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/20f74727-3fb9-44f2-bd8f-54258af1368d-typha-certs\") pod \"calico-typha-57d48f6cb5-hk2qv\" (UID: \"20f74727-3fb9-44f2-bd8f-54258af1368d\") " pod="calico-system/calico-typha-57d48f6cb5-hk2qv" Dec 12 18:24:36.808500 kubelet[2814]: I1212 18:24:36.808433 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hzn\" (UniqueName: \"kubernetes.io/projected/20f74727-3fb9-44f2-bd8f-54258af1368d-kube-api-access-w6hzn\") pod \"calico-typha-57d48f6cb5-hk2qv\" (UID: \"20f74727-3fb9-44f2-bd8f-54258af1368d\") " pod="calico-system/calico-typha-57d48f6cb5-hk2qv" Dec 12 18:24:36.816039 systemd[1]: Created slice kubepods-besteffort-pod20f74727_3fb9_44f2_bd8f_54258af1368d.slice - libcontainer container kubepods-besteffort-pod20f74727_3fb9_44f2_bd8f_54258af1368d.slice. 
Dec 12 18:24:36.825000 audit[3237]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:36.825000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd36d70db0 a2=0 a3=0 items=0 ppid=2925 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:36.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:36.958077 systemd[1]: Created slice kubepods-besteffort-podab0bccf7_2647_4682_ba4b_3dbf5e80db15.slice - libcontainer container kubepods-besteffort-podab0bccf7_2647_4682_ba4b_3dbf5e80db15.slice. Dec 12 18:24:37.009713 kubelet[2814]: I1212 18:24:37.009639 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-policysync\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009713 kubelet[2814]: I1212 18:24:37.009692 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-var-run-calico\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009713 kubelet[2814]: I1212 18:24:37.009714 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trlph\" (UniqueName: \"kubernetes.io/projected/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-kube-api-access-trlph\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " 
pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009918 kubelet[2814]: I1212 18:24:37.009733 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-cni-log-dir\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009918 kubelet[2814]: I1212 18:24:37.009751 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-cni-net-dir\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009918 kubelet[2814]: I1212 18:24:37.009772 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-flexvol-driver-host\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009918 kubelet[2814]: I1212 18:24:37.009793 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-var-lib-calico\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.009918 kubelet[2814]: I1212 18:24:37.009813 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-lib-modules\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.010112 
kubelet[2814]: I1212 18:24:37.009831 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-tigera-ca-bundle\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.010112 kubelet[2814]: I1212 18:24:37.009850 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-cni-bin-dir\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.010112 kubelet[2814]: I1212 18:24:37.009868 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-xtables-lock\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.010112 kubelet[2814]: I1212 18:24:37.009888 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab0bccf7-2647-4682-ba4b-3dbf5e80db15-node-certs\") pod \"calico-node-bhtmb\" (UID: \"ab0bccf7-2647-4682-ba4b-3dbf5e80db15\") " pod="calico-system/calico-node-bhtmb" Dec 12 18:24:37.123842 kubelet[2814]: E1212 18:24:37.123210 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:37.131407 containerd[1630]: time="2025-12-12T18:24:37.125875629Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-57d48f6cb5-hk2qv,Uid:20f74727-3fb9-44f2-bd8f-54258af1368d,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:37.135489 kubelet[2814]: E1212 18:24:37.135449 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.135673 kubelet[2814]: W1212 18:24:37.135479 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.135707 kubelet[2814]: E1212 18:24:37.135681 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:24:37.158014 containerd[1630]: time="2025-12-12T18:24:37.156495282Z" level=info msg="connecting to shim 525e5317b9e1610559ed36fdb5162594c0998a4a93f30aca9f0dca2d073db49a" address="unix:///run/containerd/s/77716a8947c02146ad2764f4298f9aa6034a1d2c257980440ff14d9b832aef8a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:37.165685 kubelet[2814]: E1212 18:24:37.163747 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.165685 kubelet[2814]: W1212 18:24:37.163780 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.165685 kubelet[2814]: E1212 18:24:37.163831 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 12 18:24:37.177925 kubelet[2814]: E1212 18:24:37.177851 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3"
Dec 12 18:24:37.205285 kubelet[2814]: E1212 18:24:37.205226 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.205285 kubelet[2814]: W1212 18:24:37.205269 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.205285 kubelet[2814]: E1212 18:24:37.205298 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.206253 kubelet[2814]: E1212 18:24:37.205928 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.206253 kubelet[2814]: W1212 18:24:37.205948 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.206253 kubelet[2814]: E1212 18:24:37.205962 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.206253 kubelet[2814]: E1212 18:24:37.206227 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.206253 kubelet[2814]: W1212 18:24:37.206236 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.206253 kubelet[2814]: E1212 18:24:37.206246 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.207935 kubelet[2814]: E1212 18:24:37.207834 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.207935 kubelet[2814]: W1212 18:24:37.207857 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.207935 kubelet[2814]: E1212 18:24:37.207871 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.208194 kubelet[2814]: E1212 18:24:37.208149 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.208194 kubelet[2814]: W1212 18:24:37.208161 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.208194 kubelet[2814]: E1212 18:24:37.208171 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.209153 kubelet[2814]: E1212 18:24:37.208475 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.209153 kubelet[2814]: W1212 18:24:37.208487 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.209153 kubelet[2814]: E1212 18:24:37.208499 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.209153 kubelet[2814]: E1212 18:24:37.208783 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.209153 kubelet[2814]: W1212 18:24:37.208795 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.209153 kubelet[2814]: E1212 18:24:37.208809 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.209524 kubelet[2814]: E1212 18:24:37.209489 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.209524 kubelet[2814]: W1212 18:24:37.209510 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.209524 kubelet[2814]: E1212 18:24:37.209522 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.211707 kubelet[2814]: E1212 18:24:37.211148 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.211707 kubelet[2814]: W1212 18:24:37.211176 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.211707 kubelet[2814]: E1212 18:24:37.211226 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.211707 kubelet[2814]: E1212 18:24:37.211425 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.211707 kubelet[2814]: W1212 18:24:37.211437 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.211707 kubelet[2814]: E1212 18:24:37.211446 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.211910 kubelet[2814]: E1212 18:24:37.211887 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.211910 kubelet[2814]: W1212 18:24:37.211903 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.212004 kubelet[2814]: E1212 18:24:37.211912 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.213125 kubelet[2814]: E1212 18:24:37.212188 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.213125 kubelet[2814]: W1212 18:24:37.212207 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.213125 kubelet[2814]: E1212 18:24:37.212218 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.213125 kubelet[2814]: E1212 18:24:37.212440 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.213125 kubelet[2814]: W1212 18:24:37.212449 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.213125 kubelet[2814]: E1212 18:24:37.212457 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.213125 kubelet[2814]: E1212 18:24:37.212687 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.213125 kubelet[2814]: W1212 18:24:37.212697 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.213125 kubelet[2814]: E1212 18:24:37.212709 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.214749 kubelet[2814]: E1212 18:24:37.214379 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.214749 kubelet[2814]: W1212 18:24:37.214399 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.214749 kubelet[2814]: E1212 18:24:37.214410 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.214749 kubelet[2814]: E1212 18:24:37.214650 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.214749 kubelet[2814]: W1212 18:24:37.214660 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.214749 kubelet[2814]: E1212 18:24:37.214669 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.216006 kubelet[2814]: E1212 18:24:37.215505 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.216006 kubelet[2814]: W1212 18:24:37.215551 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.216006 kubelet[2814]: E1212 18:24:37.215563 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.217831 kubelet[2814]: E1212 18:24:37.217783 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.217831 kubelet[2814]: W1212 18:24:37.217819 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.217902 kubelet[2814]: E1212 18:24:37.217837 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.219566 kubelet[2814]: E1212 18:24:37.218333 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.219566 kubelet[2814]: W1212 18:24:37.219279 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.219566 kubelet[2814]: E1212 18:24:37.219295 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.219283 systemd[1]: Started cri-containerd-525e5317b9e1610559ed36fdb5162594c0998a4a93f30aca9f0dca2d073db49a.scope - libcontainer container 525e5317b9e1610559ed36fdb5162594c0998a4a93f30aca9f0dca2d073db49a.
Dec 12 18:24:37.222170 kubelet[2814]: E1212 18:24:37.222139 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.222170 kubelet[2814]: W1212 18:24:37.222160 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.222242 kubelet[2814]: E1212 18:24:37.222176 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.223259 kubelet[2814]: E1212 18:24:37.223225 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.223259 kubelet[2814]: W1212 18:24:37.223254 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.223561 kubelet[2814]: E1212 18:24:37.223269 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.223561 kubelet[2814]: I1212 18:24:37.223294 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91aeba92-11d6-4129-85e3-7dedd0625bf3-kubelet-dir\") pod \"csi-node-driver-xnbvh\" (UID: \"91aeba92-11d6-4129-85e3-7dedd0625bf3\") " pod="calico-system/csi-node-driver-xnbvh"
Dec 12 18:24:37.226157 kubelet[2814]: E1212 18:24:37.226122 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.226157 kubelet[2814]: W1212 18:24:37.226153 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.226242 kubelet[2814]: E1212 18:24:37.226170 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.226242 kubelet[2814]: I1212 18:24:37.226200 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91aeba92-11d6-4129-85e3-7dedd0625bf3-registration-dir\") pod \"csi-node-driver-xnbvh\" (UID: \"91aeba92-11d6-4129-85e3-7dedd0625bf3\") " pod="calico-system/csi-node-driver-xnbvh"
Dec 12 18:24:37.227088 kubelet[2814]: E1212 18:24:37.227034 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.227124 kubelet[2814]: W1212 18:24:37.227080 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.227124 kubelet[2814]: E1212 18:24:37.227113 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.227283 kubelet[2814]: I1212 18:24:37.227245 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91aeba92-11d6-4129-85e3-7dedd0625bf3-varrun\") pod \"csi-node-driver-xnbvh\" (UID: \"91aeba92-11d6-4129-85e3-7dedd0625bf3\") " pod="calico-system/csi-node-driver-xnbvh"
Dec 12 18:24:37.228518 kubelet[2814]: E1212 18:24:37.228485 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.228571 kubelet[2814]: W1212 18:24:37.228519 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.228571 kubelet[2814]: E1212 18:24:37.228534 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.228890 kubelet[2814]: E1212 18:24:37.228865 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.228890 kubelet[2814]: W1212 18:24:37.228883 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.228954 kubelet[2814]: E1212 18:24:37.228893 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.230176 kubelet[2814]: E1212 18:24:37.230149 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.230228 kubelet[2814]: W1212 18:24:37.230189 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.230228 kubelet[2814]: E1212 18:24:37.230200 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.230508 kubelet[2814]: E1212 18:24:37.230483 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.230508 kubelet[2814]: W1212 18:24:37.230500 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.230508 kubelet[2814]: E1212 18:24:37.230508 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.230600 kubelet[2814]: I1212 18:24:37.230573 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7qw\" (UniqueName: \"kubernetes.io/projected/91aeba92-11d6-4129-85e3-7dedd0625bf3-kube-api-access-vl7qw\") pod \"csi-node-driver-xnbvh\" (UID: \"91aeba92-11d6-4129-85e3-7dedd0625bf3\") " pod="calico-system/csi-node-driver-xnbvh"
Dec 12 18:24:37.230877 kubelet[2814]: E1212 18:24:37.230824 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.230903 kubelet[2814]: W1212 18:24:37.230876 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.230903 kubelet[2814]: E1212 18:24:37.230887 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.233218 kubelet[2814]: E1212 18:24:37.233172 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.233266 kubelet[2814]: W1212 18:24:37.233224 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.233266 kubelet[2814]: E1212 18:24:37.233238 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.233633 kubelet[2814]: E1212 18:24:37.233607 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.233633 kubelet[2814]: W1212 18:24:37.233625 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.233680 kubelet[2814]: E1212 18:24:37.233635 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.233942 kubelet[2814]: E1212 18:24:37.233919 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.233942 kubelet[2814]: W1212 18:24:37.233936 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.234010 kubelet[2814]: E1212 18:24:37.233946 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.234392 kubelet[2814]: E1212 18:24:37.234364 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.234392 kubelet[2814]: W1212 18:24:37.234385 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.234444 kubelet[2814]: E1212 18:24:37.234396 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.234444 kubelet[2814]: I1212 18:24:37.234422 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91aeba92-11d6-4129-85e3-7dedd0625bf3-socket-dir\") pod \"csi-node-driver-xnbvh\" (UID: \"91aeba92-11d6-4129-85e3-7dedd0625bf3\") " pod="calico-system/csi-node-driver-xnbvh"
Dec 12 18:24:37.234651 kubelet[2814]: E1212 18:24:37.234627 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.234651 kubelet[2814]: W1212 18:24:37.234645 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.234702 kubelet[2814]: E1212 18:24:37.234655 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.234959 kubelet[2814]: E1212 18:24:37.234933 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.234959 kubelet[2814]: W1212 18:24:37.234952 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.235060 kubelet[2814]: E1212 18:24:37.234961 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.235222 kubelet[2814]: E1212 18:24:37.235195 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.235222 kubelet[2814]: W1212 18:24:37.235215 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.235271 kubelet[2814]: E1212 18:24:37.235226 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.264221 kubelet[2814]: E1212 18:24:37.264172 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:24:37.266305 containerd[1630]: time="2025-12-12T18:24:37.266234161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhtmb,Uid:ab0bccf7-2647-4682-ba4b-3dbf5e80db15,Namespace:calico-system,Attempt:0,}"
Dec 12 18:24:37.299044 containerd[1630]: time="2025-12-12T18:24:37.298761061Z" level=info msg="connecting to shim 7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed" address="unix:///run/containerd/s/7fb0ad678c6ccc8873ebea21f8439e4ac4321f0afc0a18bd8640a22640617b24" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:24:37.333167 systemd[1]: Started cri-containerd-7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed.scope - libcontainer container 7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed.
Dec 12 18:24:37.336608 kubelet[2814]: E1212 18:24:37.336564 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.336608 kubelet[2814]: W1212 18:24:37.336595 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.336729 kubelet[2814]: E1212 18:24:37.336620 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.337969 kubelet[2814]: E1212 18:24:37.337896 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.337969 kubelet[2814]: W1212 18:24:37.337921 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.337969 kubelet[2814]: E1212 18:24:37.337940 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.338424 kubelet[2814]: E1212 18:24:37.338392 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.338424 kubelet[2814]: W1212 18:24:37.338415 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.338424 kubelet[2814]: E1212 18:24:37.338425 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.339008 kubelet[2814]: E1212 18:24:37.338700 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.339008 kubelet[2814]: W1212 18:24:37.338720 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.339008 kubelet[2814]: E1212 18:24:37.338729 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.339279 kubelet[2814]: E1212 18:24:37.339252 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.339279 kubelet[2814]: W1212 18:24:37.339272 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.339279 kubelet[2814]: E1212 18:24:37.339281 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.339922 kubelet[2814]: E1212 18:24:37.339896 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.339922 kubelet[2814]: W1212 18:24:37.339914 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.339922 kubelet[2814]: E1212 18:24:37.339924 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.340415 kubelet[2814]: E1212 18:24:37.340388 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.340415 kubelet[2814]: W1212 18:24:37.340406 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.340415 kubelet[2814]: E1212 18:24:37.340416 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.340808 kubelet[2814]: E1212 18:24:37.340767 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.340808 kubelet[2814]: W1212 18:24:37.340786 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.340808 kubelet[2814]: E1212 18:24:37.340795 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.341157 kubelet[2814]: E1212 18:24:37.341133 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.341157 kubelet[2814]: W1212 18:24:37.341148 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.341157 kubelet[2814]: E1212 18:24:37.341157 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.341519 kubelet[2814]: E1212 18:24:37.341389 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.341519 kubelet[2814]: W1212 18:24:37.341407 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.341519 kubelet[2814]: E1212 18:24:37.341416 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.341605 kubelet[2814]: E1212 18:24:37.341577 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.341605 kubelet[2814]: W1212 18:24:37.341585 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.341605 kubelet[2814]: E1212 18:24:37.341592 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.342012 kubelet[2814]: E1212 18:24:37.341802 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.342012 kubelet[2814]: W1212 18:24:37.341825 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.342012 kubelet[2814]: E1212 18:24:37.341838 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.342152 kubelet[2814]: E1212 18:24:37.342124 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.342152 kubelet[2814]: W1212 18:24:37.342142 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.342152 kubelet[2814]: E1212 18:24:37.342153 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.342382 kubelet[2814]: E1212 18:24:37.342327 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.342382 kubelet[2814]: W1212 18:24:37.342344 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.342382 kubelet[2814]: E1212 18:24:37.342353 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.342576 kubelet[2814]: E1212 18:24:37.342515 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.342576 kubelet[2814]: W1212 18:24:37.342531 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.342576 kubelet[2814]: E1212 18:24:37.342539 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.342736 kubelet[2814]: E1212 18:24:37.342699 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.342736 kubelet[2814]: W1212 18:24:37.342716 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.342736 kubelet[2814]: E1212 18:24:37.342724 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:24:37.343120 kubelet[2814]: E1212 18:24:37.343047 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:24:37.343120 kubelet[2814]: W1212 18:24:37.343064 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:24:37.343120 kubelet[2814]: E1212 18:24:37.343074 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:24:37.343632 kubelet[2814]: E1212 18:24:37.343591 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.343632 kubelet[2814]: W1212 18:24:37.343610 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.343632 kubelet[2814]: E1212 18:24:37.343621 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:24:37.343919 kubelet[2814]: E1212 18:24:37.343881 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.343919 kubelet[2814]: W1212 18:24:37.343897 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.343919 kubelet[2814]: E1212 18:24:37.343906 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:24:37.344202 kubelet[2814]: E1212 18:24:37.344177 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.344202 kubelet[2814]: W1212 18:24:37.344197 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.344277 kubelet[2814]: E1212 18:24:37.344207 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:24:37.344609 kubelet[2814]: E1212 18:24:37.344583 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.344609 kubelet[2814]: W1212 18:24:37.344601 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.344609 kubelet[2814]: E1212 18:24:37.344610 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:24:37.344848 kubelet[2814]: E1212 18:24:37.344805 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.344848 kubelet[2814]: W1212 18:24:37.344832 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.344848 kubelet[2814]: E1212 18:24:37.344841 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:24:37.345119 kubelet[2814]: E1212 18:24:37.345093 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.345119 kubelet[2814]: W1212 18:24:37.345111 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.345119 kubelet[2814]: E1212 18:24:37.345120 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:24:37.345418 kubelet[2814]: E1212 18:24:37.345380 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.345418 kubelet[2814]: W1212 18:24:37.345398 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.345418 kubelet[2814]: E1212 18:24:37.345407 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:24:37.345669 kubelet[2814]: E1212 18:24:37.345645 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.345669 kubelet[2814]: W1212 18:24:37.345663 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.345745 kubelet[2814]: E1212 18:24:37.345672 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:24:37.372670 kubelet[2814]: E1212 18:24:37.371373 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:24:37.372670 kubelet[2814]: W1212 18:24:37.371400 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:24:37.372670 kubelet[2814]: E1212 18:24:37.371422 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:24:37.371000 audit: BPF prog-id=163 op=LOAD Dec 12 18:24:37.372000 audit: BPF prog-id=164 op=LOAD Dec 12 18:24:37.372000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.373000 audit: BPF prog-id=164 op=UNLOAD Dec 12 18:24:37.373000 audit[3346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.373000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.373000 audit: BPF prog-id=165 op=LOAD Dec 12 18:24:37.373000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.375000 audit: BPF prog-id=166 op=LOAD Dec 12 18:24:37.375000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.376000 audit: BPF prog-id=166 op=UNLOAD Dec 12 18:24:37.376000 audit[3346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:24:37.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.376000 audit: BPF prog-id=165 op=UNLOAD Dec 12 18:24:37.376000 audit[3346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.376000 audit: BPF prog-id=167 op=LOAD Dec 12 18:24:37.376000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3333 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763343430303439646436303334633133343238656261353761656339 Dec 12 18:24:37.442443 containerd[1630]: time="2025-12-12T18:24:37.440947151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhtmb,Uid:ab0bccf7-2647-4682-ba4b-3dbf5e80db15,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\"" 
Dec 12 18:24:37.444377 kubelet[2814]: E1212 18:24:37.444353 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:37.447131 containerd[1630]: time="2025-12-12T18:24:37.446865903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:24:37.489000 audit: BPF prog-id=168 op=LOAD Dec 12 18:24:37.491000 audit: BPF prog-id=169 op=LOAD Dec 12 18:24:37.491000 audit[3268]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.491000 audit: BPF prog-id=169 op=UNLOAD Dec 12 18:24:37.491000 audit[3268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.491000 audit: BPF prog-id=170 op=LOAD Dec 12 18:24:37.491000 audit[3268]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 
items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.491000 audit: BPF prog-id=171 op=LOAD Dec 12 18:24:37.491000 audit[3268]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.492000 audit: BPF prog-id=171 op=UNLOAD Dec 12 18:24:37.492000 audit[3268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.492000 audit: BPF prog-id=170 op=UNLOAD Dec 12 18:24:37.492000 audit[3268]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.492000 audit: BPF prog-id=172 op=LOAD Dec 12 18:24:37.492000 audit[3268]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3253 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356535333137623965313631303535396564333666646235313632 Dec 12 18:24:37.542203 containerd[1630]: time="2025-12-12T18:24:37.542121139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d48f6cb5-hk2qv,Uid:20f74727-3fb9-44f2-bd8f-54258af1368d,Namespace:calico-system,Attempt:0,} returns sandbox id \"525e5317b9e1610559ed36fdb5162594c0998a4a93f30aca9f0dca2d073db49a\"" Dec 12 18:24:37.544643 kubelet[2814]: E1212 18:24:37.544493 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:37.842000 audit[3410]: NETFILTER_CFG table=filter:113 family=2 entries=22 op=nft_register_rule pid=3410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 
12 18:24:37.845014 kernel: kauditd_printk_skb: 63 callbacks suppressed Dec 12 18:24:37.845098 kernel: audit: type=1325 audit(1765563877.842:562): table=filter:113 family=2 entries=22 op=nft_register_rule pid=3410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:37.842000 audit[3410]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe00ac4f20 a2=0 a3=7ffe00ac4f0c items=0 ppid=2925 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.874011 kernel: audit: type=1300 audit(1765563877.842:562): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe00ac4f20 a2=0 a3=7ffe00ac4f0c items=0 ppid=2925 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:37.880466 kernel: audit: type=1327 audit(1765563877.842:562): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:37.872000 audit[3410]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:37.887638 kernel: audit: type=1325 audit(1765563877.872:563): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:37.890011 kernel: audit: type=1300 audit(1765563877.872:563): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe00ac4f20 a2=0 a3=0 items=0 ppid=2925 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.872000 audit[3410]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe00ac4f20 a2=0 a3=0 items=0 ppid=2925 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:37.897064 kernel: audit: type=1327 audit(1765563877.872:563): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:37.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:38.035796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013701529.mount: Deactivated successfully. Dec 12 18:24:38.104756 containerd[1630]: time="2025-12-12T18:24:38.104376345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:38.105377 containerd[1630]: time="2025-12-12T18:24:38.105150273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:38.105797 containerd[1630]: time="2025-12-12T18:24:38.105770004Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:38.107635 containerd[1630]: time="2025-12-12T18:24:38.107602912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:38.108524 containerd[1630]: time="2025-12-12T18:24:38.108482180Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 661.575746ms" Dec 12 18:24:38.108633 containerd[1630]: time="2025-12-12T18:24:38.108610880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:24:38.111497 containerd[1630]: time="2025-12-12T18:24:38.111463137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:24:38.113349 containerd[1630]: time="2025-12-12T18:24:38.113206145Z" level=info msg="CreateContainer within sandbox \"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:24:38.120903 containerd[1630]: time="2025-12-12T18:24:38.120867776Z" level=info msg="Container 1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:38.126205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819216211.mount: Deactivated successfully. 
Dec 12 18:24:38.140227 containerd[1630]: time="2025-12-12T18:24:38.140190546Z" level=info msg="CreateContainer within sandbox \"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b\"" Dec 12 18:24:38.141086 containerd[1630]: time="2025-12-12T18:24:38.140803365Z" level=info msg="StartContainer for \"1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b\"" Dec 12 18:24:38.142687 containerd[1630]: time="2025-12-12T18:24:38.142624223Z" level=info msg="connecting to shim 1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b" address="unix:///run/containerd/s/7fb0ad678c6ccc8873ebea21f8439e4ac4321f0afc0a18bd8640a22640617b24" protocol=ttrpc version=3 Dec 12 18:24:38.165162 systemd[1]: Started cri-containerd-1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b.scope - libcontainer container 1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b. 
Dec 12 18:24:38.225000 audit: BPF prog-id=173 op=LOAD Dec 12 18:24:38.229099 kernel: audit: type=1334 audit(1765563878.225:564): prog-id=173 op=LOAD Dec 12 18:24:38.225000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3333 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:38.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366631303363636233333965636237373733613236393639666136 Dec 12 18:24:38.244481 kernel: audit: type=1300 audit(1765563878.225:564): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3333 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:38.244607 kernel: audit: type=1327 audit(1765563878.225:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366631303363636233333965636237373733613236393639666136 Dec 12 18:24:38.247201 kernel: audit: type=1334 audit(1765563878.225:565): prog-id=174 op=LOAD Dec 12 18:24:38.225000 audit: BPF prog-id=174 op=LOAD Dec 12 18:24:38.225000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3333 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:38.225000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366631303363636233333965636237373733613236393639666136 Dec 12 18:24:38.225000 audit: BPF prog-id=174 op=UNLOAD Dec 12 18:24:38.225000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:38.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366631303363636233333965636237373733613236393639666136 Dec 12 18:24:38.225000 audit: BPF prog-id=173 op=UNLOAD Dec 12 18:24:38.225000 audit[3419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:38.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366631303363636233333965636237373733613236393639666136 Dec 12 18:24:38.225000 audit: BPF prog-id=175 op=LOAD Dec 12 18:24:38.225000 audit[3419]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3333 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:38.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161366631303363636233333965636237373733613236393639666136 Dec 12 18:24:38.271622 containerd[1630]: time="2025-12-12T18:24:38.271565982Z" level=info msg="StartContainer for \"1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b\" returns successfully" Dec 12 18:24:38.280878 systemd[1]: cri-containerd-1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b.scope: Deactivated successfully. Dec 12 18:24:38.283000 audit: BPF prog-id=175 op=UNLOAD Dec 12 18:24:38.285128 containerd[1630]: time="2025-12-12T18:24:38.285098996Z" level=info msg="received container exit event container_id:\"1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b\" id:\"1a6f103ccb339ecb7773a26969fa6500c4274c008768256e5a5707495140129b\" pid:3432 exited_at:{seconds:1765563878 nanos:284347428}" Dec 12 18:24:38.390974 kubelet[2814]: E1212 18:24:38.390859 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:24:38.484514 kubelet[2814]: E1212 18:24:38.483757 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:39.333854 containerd[1630]: time="2025-12-12T18:24:39.333800691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:39.334828 containerd[1630]: time="2025-12-12T18:24:39.334479581Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:39.335059 containerd[1630]: time="2025-12-12T18:24:39.335028770Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:39.336464 containerd[1630]: time="2025-12-12T18:24:39.336439789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:39.337592 containerd[1630]: time="2025-12-12T18:24:39.337021838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.225521161s" Dec 12 18:24:39.337592 containerd[1630]: time="2025-12-12T18:24:39.337046188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:24:39.339749 containerd[1630]: time="2025-12-12T18:24:39.339143056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:24:39.360747 containerd[1630]: time="2025-12-12T18:24:39.360705835Z" level=info msg="CreateContainer within sandbox \"525e5317b9e1610559ed36fdb5162594c0998a4a93f30aca9f0dca2d073db49a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:24:39.371121 containerd[1630]: time="2025-12-12T18:24:39.371083714Z" level=info msg="Container 086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:39.372198 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2500975502.mount: Deactivated successfully. Dec 12 18:24:39.378597 containerd[1630]: time="2025-12-12T18:24:39.378560297Z" level=info msg="CreateContainer within sandbox \"525e5317b9e1610559ed36fdb5162594c0998a4a93f30aca9f0dca2d073db49a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e\"" Dec 12 18:24:39.380478 containerd[1630]: time="2025-12-12T18:24:39.380338945Z" level=info msg="StartContainer for \"086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e\"" Dec 12 18:24:39.382461 containerd[1630]: time="2025-12-12T18:24:39.382436943Z" level=info msg="connecting to shim 086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e" address="unix:///run/containerd/s/77716a8947c02146ad2764f4298f9aa6034a1d2c257980440ff14d9b832aef8a" protocol=ttrpc version=3 Dec 12 18:24:39.407430 systemd[1]: Started cri-containerd-086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e.scope - libcontainer container 086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e. 
Dec 12 18:24:39.426000 audit: BPF prog-id=176 op=LOAD Dec 12 18:24:39.426000 audit: BPF prog-id=177 op=LOAD Dec 12 18:24:39.426000 audit[3474]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:39.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.426000 audit: BPF prog-id=177 op=UNLOAD Dec 12 18:24:39.426000 audit[3474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:39.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.427000 audit: BPF prog-id=178 op=LOAD Dec 12 18:24:39.427000 audit[3474]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:39.427000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.427000 audit: BPF prog-id=179 op=LOAD Dec 12 18:24:39.427000 audit[3474]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:39.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.427000 audit: BPF prog-id=179 op=UNLOAD Dec 12 18:24:39.427000 audit[3474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:39.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.427000 audit: BPF prog-id=178 op=UNLOAD Dec 12 18:24:39.427000 audit[3474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:39.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.427000 audit: BPF prog-id=180 op=LOAD Dec 12 18:24:39.427000 audit[3474]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3253 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:39.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038366330663033643830316633656361366133643162373864343261 Dec 12 18:24:39.470285 containerd[1630]: time="2025-12-12T18:24:39.470204595Z" level=info msg="StartContainer for \"086c0f03d801f3eca6a3d1b78d42a5cf0ec0742ae59a1765759f5a3ca530583e\" returns successfully" Dec 12 18:24:39.489413 kubelet[2814]: E1212 18:24:39.489361 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:40.391402 kubelet[2814]: E1212 18:24:40.391302 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:24:40.491435 kubelet[2814]: I1212 18:24:40.491182 2814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:24:40.492931 kubelet[2814]: 
E1212 18:24:40.492849 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:41.063399 containerd[1630]: time="2025-12-12T18:24:41.063348264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:41.064314 containerd[1630]: time="2025-12-12T18:24:41.064190853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445002" Dec 12 18:24:41.064833 containerd[1630]: time="2025-12-12T18:24:41.064806563Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:41.066514 containerd[1630]: time="2025-12-12T18:24:41.066493561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:41.066925 containerd[1630]: time="2025-12-12T18:24:41.066898891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.727727175s" Dec 12 18:24:41.066966 containerd[1630]: time="2025-12-12T18:24:41.066926001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:24:41.071116 containerd[1630]: time="2025-12-12T18:24:41.071092317Z" level=info msg="CreateContainer within sandbox 
\"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:24:41.080802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024096003.mount: Deactivated successfully. Dec 12 18:24:41.083802 containerd[1630]: time="2025-12-12T18:24:41.082915708Z" level=info msg="Container 0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:41.092064 containerd[1630]: time="2025-12-12T18:24:41.092040011Z" level=info msg="CreateContainer within sandbox \"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b\"" Dec 12 18:24:41.093637 containerd[1630]: time="2025-12-12T18:24:41.093595670Z" level=info msg="StartContainer for \"0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b\"" Dec 12 18:24:41.100082 containerd[1630]: time="2025-12-12T18:24:41.099029055Z" level=info msg="connecting to shim 0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b" address="unix:///run/containerd/s/7fb0ad678c6ccc8873ebea21f8439e4ac4321f0afc0a18bd8640a22640617b24" protocol=ttrpc version=3 Dec 12 18:24:41.122315 systemd[1]: Started cri-containerd-0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b.scope - libcontainer container 0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b. 
Dec 12 18:24:41.173000 audit: BPF prog-id=181 op=LOAD Dec 12 18:24:41.173000 audit[3518]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3333 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:41.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062383833336434306636646232666464383762613762633739663364 Dec 12 18:24:41.173000 audit: BPF prog-id=182 op=LOAD Dec 12 18:24:41.173000 audit[3518]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3333 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:41.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062383833336434306636646232666464383762613762633739663364 Dec 12 18:24:41.173000 audit: BPF prog-id=182 op=UNLOAD Dec 12 18:24:41.173000 audit[3518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:41.173000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062383833336434306636646232666464383762613762633739663364 Dec 12 18:24:41.173000 audit: BPF prog-id=181 op=UNLOAD Dec 12 18:24:41.173000 audit[3518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:41.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062383833336434306636646232666464383762613762633739663364 Dec 12 18:24:41.174000 audit: BPF prog-id=183 op=LOAD Dec 12 18:24:41.174000 audit[3518]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3333 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:41.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062383833336434306636646232666464383762613762633739663364 Dec 12 18:24:41.198828 containerd[1630]: time="2025-12-12T18:24:41.198789934Z" level=info msg="StartContainer for \"0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b\" returns successfully" Dec 12 18:24:41.506189 kubelet[2814]: E1212 18:24:41.504948 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:41.522832 kubelet[2814]: I1212 18:24:41.522470 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57d48f6cb5-hk2qv" podStartSLOduration=3.730528658 podStartE2EDuration="5.52245865s" podCreationTimestamp="2025-12-12 18:24:36 +0000 UTC" firstStartedPulling="2025-12-12 18:24:37.545827325 +0000 UTC m=+20.248063819" lastFinishedPulling="2025-12-12 18:24:39.337757317 +0000 UTC m=+22.039993811" observedRunningTime="2025-12-12 18:24:39.508781167 +0000 UTC m=+22.211017661" watchObservedRunningTime="2025-12-12 18:24:41.52245865 +0000 UTC m=+24.224695144" Dec 12 18:24:41.685566 containerd[1630]: time="2025-12-12T18:24:41.685497357Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:24:41.688884 systemd[1]: cri-containerd-0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b.scope: Deactivated successfully. Dec 12 18:24:41.689470 systemd[1]: cri-containerd-0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b.scope: Consumed 497ms CPU time, 195.7M memory peak, 171.3M written to disk. Dec 12 18:24:41.690060 containerd[1630]: time="2025-12-12T18:24:41.690037894Z" level=info msg="received container exit event container_id:\"0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b\" id:\"0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b\" pid:3531 exited_at:{seconds:1765563881 nanos:689404314}" Dec 12 18:24:41.691000 audit: BPF prog-id=183 op=UNLOAD Dec 12 18:24:41.712882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b8833d40f6db2fdd87ba7bc79f3de81a3a60b2f591b993f6426ed851c929e8b-rootfs.mount: Deactivated successfully. 
Dec 12 18:24:41.763915 kubelet[2814]: I1212 18:24:41.763331 2814 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 18:24:41.798816 systemd[1]: Created slice kubepods-burstable-pod4664a851_47ee_4ce8_a989_b4c3338ff2f7.slice - libcontainer container kubepods-burstable-pod4664a851_47ee_4ce8_a989_b4c3338ff2f7.slice. Dec 12 18:24:41.812744 systemd[1]: Created slice kubepods-besteffort-podd4234e62_fee9_4e5f_91a6_36421f56e51b.slice - libcontainer container kubepods-besteffort-podd4234e62_fee9_4e5f_91a6_36421f56e51b.slice. Dec 12 18:24:41.824910 systemd[1]: Created slice kubepods-besteffort-podc1ce557f_fee1_488f_bf03_0d09f4a1964c.slice - libcontainer container kubepods-besteffort-podc1ce557f_fee1_488f_bf03_0d09f4a1964c.slice. Dec 12 18:24:41.834400 systemd[1]: Created slice kubepods-burstable-pod54fb9877_a362_4f5b_ba54_73070bbdb7a5.slice - libcontainer container kubepods-burstable-pod54fb9877_a362_4f5b_ba54_73070bbdb7a5.slice. Dec 12 18:24:41.841812 systemd[1]: Created slice kubepods-besteffort-pod3f177278_7ed2_426c_a0d0_27da05aa7f69.slice - libcontainer container kubepods-besteffort-pod3f177278_7ed2_426c_a0d0_27da05aa7f69.slice. Dec 12 18:24:41.852131 systemd[1]: Created slice kubepods-besteffort-poda8fbc410_f738_4c23_8813_68d1a7480f15.slice - libcontainer container kubepods-besteffort-poda8fbc410_f738_4c23_8813_68d1a7480f15.slice. Dec 12 18:24:41.858501 systemd[1]: Created slice kubepods-besteffort-pod1b812c88_50ce_4177_8552_96cf1344574a.slice - libcontainer container kubepods-besteffort-pod1b812c88_50ce_4177_8552_96cf1344574a.slice. 
Dec 12 18:24:41.877604 kubelet[2814]: I1212 18:24:41.877557 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1ce557f-fee1-488f-bf03-0d09f4a1964c-calico-apiserver-certs\") pod \"calico-apiserver-5d49f44685-gclmc\" (UID: \"c1ce557f-fee1-488f-bf03-0d09f4a1964c\") " pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" Dec 12 18:24:41.878135 kubelet[2814]: I1212 18:24:41.878083 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b812c88-50ce-4177-8552-96cf1344574a-whisker-backend-key-pair\") pod \"whisker-69df7f9dd4-7g8qh\" (UID: \"1b812c88-50ce-4177-8552-96cf1344574a\") " pod="calico-system/whisker-69df7f9dd4-7g8qh" Dec 12 18:24:41.878135 kubelet[2814]: I1212 18:24:41.878107 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42vf\" (UniqueName: \"kubernetes.io/projected/c1ce557f-fee1-488f-bf03-0d09f4a1964c-kube-api-access-t42vf\") pod \"calico-apiserver-5d49f44685-gclmc\" (UID: \"c1ce557f-fee1-488f-bf03-0d09f4a1964c\") " pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" Dec 12 18:24:41.878318 kubelet[2814]: I1212 18:24:41.878267 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fbc410-f738-4c23-8813-68d1a7480f15-config\") pod \"goldmane-7c778bb748-dl6p8\" (UID: \"a8fbc410-f738-4c23-8813-68d1a7480f15\") " pod="calico-system/goldmane-7c778bb748-dl6p8" Dec 12 18:24:41.878318 kubelet[2814]: I1212 18:24:41.878289 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8fbc410-f738-4c23-8813-68d1a7480f15-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-dl6p8\" (UID: 
\"a8fbc410-f738-4c23-8813-68d1a7480f15\") " pod="calico-system/goldmane-7c778bb748-dl6p8" Dec 12 18:24:41.878435 kubelet[2814]: I1212 18:24:41.878418 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a8fbc410-f738-4c23-8813-68d1a7480f15-goldmane-key-pair\") pod \"goldmane-7c778bb748-dl6p8\" (UID: \"a8fbc410-f738-4c23-8813-68d1a7480f15\") " pod="calico-system/goldmane-7c778bb748-dl6p8" Dec 12 18:24:41.878589 kubelet[2814]: I1212 18:24:41.878564 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhn5w\" (UniqueName: \"kubernetes.io/projected/3f177278-7ed2-426c-a0d0-27da05aa7f69-kube-api-access-fhn5w\") pod \"calico-apiserver-5d49f44685-wf2rc\" (UID: \"3f177278-7ed2-426c-a0d0-27da05aa7f69\") " pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" Dec 12 18:24:41.878765 kubelet[2814]: I1212 18:24:41.878698 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mg8j\" (UniqueName: \"kubernetes.io/projected/a8fbc410-f738-4c23-8813-68d1a7480f15-kube-api-access-7mg8j\") pod \"goldmane-7c778bb748-dl6p8\" (UID: \"a8fbc410-f738-4c23-8813-68d1a7480f15\") " pod="calico-system/goldmane-7c778bb748-dl6p8" Dec 12 18:24:41.878917 kubelet[2814]: I1212 18:24:41.878719 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4664a851-47ee-4ce8-a989-b4c3338ff2f7-config-volume\") pod \"coredns-66bc5c9577-dt25m\" (UID: \"4664a851-47ee-4ce8-a989-b4c3338ff2f7\") " pod="kube-system/coredns-66bc5c9577-dt25m" Dec 12 18:24:41.879075 kubelet[2814]: I1212 18:24:41.879030 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/3f177278-7ed2-426c-a0d0-27da05aa7f69-calico-apiserver-certs\") pod \"calico-apiserver-5d49f44685-wf2rc\" (UID: \"3f177278-7ed2-426c-a0d0-27da05aa7f69\") " pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" Dec 12 18:24:41.879075 kubelet[2814]: I1212 18:24:41.879053 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4234e62-fee9-4e5f-91a6-36421f56e51b-tigera-ca-bundle\") pod \"calico-kube-controllers-57947d7c9d-zl2bt\" (UID: \"d4234e62-fee9-4e5f-91a6-36421f56e51b\") " pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" Dec 12 18:24:41.879302 kubelet[2814]: I1212 18:24:41.879253 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drv56\" (UniqueName: \"kubernetes.io/projected/d4234e62-fee9-4e5f-91a6-36421f56e51b-kube-api-access-drv56\") pod \"calico-kube-controllers-57947d7c9d-zl2bt\" (UID: \"d4234e62-fee9-4e5f-91a6-36421f56e51b\") " pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" Dec 12 18:24:41.879302 kubelet[2814]: I1212 18:24:41.879273 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b812c88-50ce-4177-8552-96cf1344574a-whisker-ca-bundle\") pod \"whisker-69df7f9dd4-7g8qh\" (UID: \"1b812c88-50ce-4177-8552-96cf1344574a\") " pod="calico-system/whisker-69df7f9dd4-7g8qh" Dec 12 18:24:41.879436 kubelet[2814]: I1212 18:24:41.879422 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8vg\" (UniqueName: \"kubernetes.io/projected/1b812c88-50ce-4177-8552-96cf1344574a-kube-api-access-hf8vg\") pod \"whisker-69df7f9dd4-7g8qh\" (UID: \"1b812c88-50ce-4177-8552-96cf1344574a\") " pod="calico-system/whisker-69df7f9dd4-7g8qh" Dec 12 18:24:41.879622 kubelet[2814]: I1212 
18:24:41.879565 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54fb9877-a362-4f5b-ba54-73070bbdb7a5-config-volume\") pod \"coredns-66bc5c9577-pxm79\" (UID: \"54fb9877-a362-4f5b-ba54-73070bbdb7a5\") " pod="kube-system/coredns-66bc5c9577-pxm79" Dec 12 18:24:41.879622 kubelet[2814]: I1212 18:24:41.879586 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8xw5\" (UniqueName: \"kubernetes.io/projected/54fb9877-a362-4f5b-ba54-73070bbdb7a5-kube-api-access-f8xw5\") pod \"coredns-66bc5c9577-pxm79\" (UID: \"54fb9877-a362-4f5b-ba54-73070bbdb7a5\") " pod="kube-system/coredns-66bc5c9577-pxm79" Dec 12 18:24:41.879622 kubelet[2814]: I1212 18:24:41.879600 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7x2r\" (UniqueName: \"kubernetes.io/projected/4664a851-47ee-4ce8-a989-b4c3338ff2f7-kube-api-access-v7x2r\") pod \"coredns-66bc5c9577-dt25m\" (UID: \"4664a851-47ee-4ce8-a989-b4c3338ff2f7\") " pod="kube-system/coredns-66bc5c9577-dt25m" Dec 12 18:24:42.111547 kubelet[2814]: E1212 18:24:42.111243 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:42.112560 containerd[1630]: time="2025-12-12T18:24:42.112527039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt25m,Uid:4664a851-47ee-4ce8-a989-b4c3338ff2f7,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:42.120154 containerd[1630]: time="2025-12-12T18:24:42.120126674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57947d7c9d-zl2bt,Uid:d4234e62-fee9-4e5f-91a6-36421f56e51b,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:42.134338 containerd[1630]: 
time="2025-12-12T18:24:42.134231323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-gclmc,Uid:c1ce557f-fee1-488f-bf03-0d09f4a1964c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:24:42.145071 kubelet[2814]: E1212 18:24:42.144611 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:42.147256 containerd[1630]: time="2025-12-12T18:24:42.146884324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pxm79,Uid:54fb9877-a362-4f5b-ba54-73070bbdb7a5,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:42.150047 containerd[1630]: time="2025-12-12T18:24:42.149657741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-wf2rc,Uid:3f177278-7ed2-426c-a0d0-27da05aa7f69,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:24:42.165926 containerd[1630]: time="2025-12-12T18:24:42.165900939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dl6p8,Uid:a8fbc410-f738-4c23-8813-68d1a7480f15,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:42.174428 containerd[1630]: time="2025-12-12T18:24:42.174380914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69df7f9dd4-7g8qh,Uid:1b812c88-50ce-4177-8552-96cf1344574a,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:42.304886 containerd[1630]: time="2025-12-12T18:24:42.304675778Z" level=error msg="Failed to destroy network for sandbox \"523e3eb8976ed63415172b419b7f250e23fb51fc0c84f9e6f465e2f9d6abfb54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.307886 containerd[1630]: time="2025-12-12T18:24:42.307412106Z" level=error msg="Failed to destroy network for sandbox 
\"6fbfbe0bd2961d28533a554915878d9341455a62d632cb3697c0cff6a7ce157a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.307886 containerd[1630]: time="2025-12-12T18:24:42.307554936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt25m,Uid:4664a851-47ee-4ce8-a989-b4c3338ff2f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"523e3eb8976ed63415172b419b7f250e23fb51fc0c84f9e6f465e2f9d6abfb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.308108 kubelet[2814]: E1212 18:24:42.307826 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"523e3eb8976ed63415172b419b7f250e23fb51fc0c84f9e6f465e2f9d6abfb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.308381 kubelet[2814]: E1212 18:24:42.307939 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"523e3eb8976ed63415172b419b7f250e23fb51fc0c84f9e6f465e2f9d6abfb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dt25m" Dec 12 18:24:42.308562 kubelet[2814]: E1212 18:24:42.308543 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"523e3eb8976ed63415172b419b7f250e23fb51fc0c84f9e6f465e2f9d6abfb54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dt25m" Dec 12 18:24:42.308805 kubelet[2814]: E1212 18:24:42.308762 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dt25m_kube-system(4664a851-47ee-4ce8-a989-b4c3338ff2f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dt25m_kube-system(4664a851-47ee-4ce8-a989-b4c3338ff2f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"523e3eb8976ed63415172b419b7f250e23fb51fc0c84f9e6f465e2f9d6abfb54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dt25m" podUID="4664a851-47ee-4ce8-a989-b4c3338ff2f7" Dec 12 18:24:42.313521 containerd[1630]: time="2025-12-12T18:24:42.313475281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57947d7c9d-zl2bt,Uid:d4234e62-fee9-4e5f-91a6-36421f56e51b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbfbe0bd2961d28533a554915878d9341455a62d632cb3697c0cff6a7ce157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.313708 kubelet[2814]: E1212 18:24:42.313658 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbfbe0bd2961d28533a554915878d9341455a62d632cb3697c0cff6a7ce157a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.313752 kubelet[2814]: E1212 18:24:42.313705 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbfbe0bd2961d28533a554915878d9341455a62d632cb3697c0cff6a7ce157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" Dec 12 18:24:42.313752 kubelet[2814]: E1212 18:24:42.313723 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fbfbe0bd2961d28533a554915878d9341455a62d632cb3697c0cff6a7ce157a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" Dec 12 18:24:42.313808 kubelet[2814]: E1212 18:24:42.313763 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fbfbe0bd2961d28533a554915878d9341455a62d632cb3697c0cff6a7ce157a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" 
podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:24:42.332334 containerd[1630]: time="2025-12-12T18:24:42.332218357Z" level=error msg="Failed to destroy network for sandbox \"67312621eaea2b5126f4d2f1c1dedad63543d97a0ecda3ebcd6cc982d4636160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.333730 containerd[1630]: time="2025-12-12T18:24:42.333693696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pxm79,Uid:54fb9877-a362-4f5b-ba54-73070bbdb7a5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67312621eaea2b5126f4d2f1c1dedad63543d97a0ecda3ebcd6cc982d4636160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.335408 kubelet[2814]: E1212 18:24:42.334125 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67312621eaea2b5126f4d2f1c1dedad63543d97a0ecda3ebcd6cc982d4636160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.335408 kubelet[2814]: E1212 18:24:42.334186 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67312621eaea2b5126f4d2f1c1dedad63543d97a0ecda3ebcd6cc982d4636160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pxm79" Dec 12 18:24:42.335408 kubelet[2814]: E1212 
18:24:42.334202 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67312621eaea2b5126f4d2f1c1dedad63543d97a0ecda3ebcd6cc982d4636160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pxm79" Dec 12 18:24:42.335521 kubelet[2814]: E1212 18:24:42.335351 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pxm79_kube-system(54fb9877-a362-4f5b-ba54-73070bbdb7a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pxm79_kube-system(54fb9877-a362-4f5b-ba54-73070bbdb7a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67312621eaea2b5126f4d2f1c1dedad63543d97a0ecda3ebcd6cc982d4636160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pxm79" podUID="54fb9877-a362-4f5b-ba54-73070bbdb7a5" Dec 12 18:24:42.337254 containerd[1630]: time="2025-12-12T18:24:42.337088244Z" level=error msg="Failed to destroy network for sandbox \"49bb1f4d6c213611ea80a5a245f775dea777c896d7caf53eb2eb4abaf961083c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.339399 containerd[1630]: time="2025-12-12T18:24:42.339358672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-gclmc,Uid:c1ce557f-fee1-488f-bf03-0d09f4a1964c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"49bb1f4d6c213611ea80a5a245f775dea777c896d7caf53eb2eb4abaf961083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.340083 kubelet[2814]: E1212 18:24:42.339675 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bb1f4d6c213611ea80a5a245f775dea777c896d7caf53eb2eb4abaf961083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.340083 kubelet[2814]: E1212 18:24:42.339721 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bb1f4d6c213611ea80a5a245f775dea777c896d7caf53eb2eb4abaf961083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" Dec 12 18:24:42.340083 kubelet[2814]: E1212 18:24:42.339738 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bb1f4d6c213611ea80a5a245f775dea777c896d7caf53eb2eb4abaf961083c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" Dec 12 18:24:42.340193 kubelet[2814]: E1212 18:24:42.339779 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49bb1f4d6c213611ea80a5a245f775dea777c896d7caf53eb2eb4abaf961083c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:24:42.352644 containerd[1630]: time="2025-12-12T18:24:42.352605783Z" level=error msg="Failed to destroy network for sandbox \"1e2325651931298c23104f766f3404ce72b5ed76efd9a0e2343e09cf813092d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.354441 containerd[1630]: time="2025-12-12T18:24:42.354404782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-wf2rc,Uid:3f177278-7ed2-426c-a0d0-27da05aa7f69,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2325651931298c23104f766f3404ce72b5ed76efd9a0e2343e09cf813092d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.354734 kubelet[2814]: E1212 18:24:42.354707 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2325651931298c23104f766f3404ce72b5ed76efd9a0e2343e09cf813092d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.355042 
kubelet[2814]: E1212 18:24:42.354821 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2325651931298c23104f766f3404ce72b5ed76efd9a0e2343e09cf813092d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" Dec 12 18:24:42.355042 kubelet[2814]: E1212 18:24:42.354843 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2325651931298c23104f766f3404ce72b5ed76efd9a0e2343e09cf813092d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" Dec 12 18:24:42.355042 kubelet[2814]: E1212 18:24:42.354895 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e2325651931298c23104f766f3404ce72b5ed76efd9a0e2343e09cf813092d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:24:42.366271 containerd[1630]: time="2025-12-12T18:24:42.365832473Z" level=error msg="Failed to destroy network for sandbox 
\"f787b0ba8e339ccfcdfa69760b0f54873d8844aee9b34cfca34cd284c55aa862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.366271 containerd[1630]: time="2025-12-12T18:24:42.365949493Z" level=error msg="Failed to destroy network for sandbox \"ca461b5800ad19a51ff84c719b0e4bd0362e408bbad31fdd8f4d37bce5131557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.367930 containerd[1630]: time="2025-12-12T18:24:42.367821141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dl6p8,Uid:a8fbc410-f738-4c23-8813-68d1a7480f15,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f787b0ba8e339ccfcdfa69760b0f54873d8844aee9b34cfca34cd284c55aa862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.368132 kubelet[2814]: E1212 18:24:42.367965 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f787b0ba8e339ccfcdfa69760b0f54873d8844aee9b34cfca34cd284c55aa862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.368132 kubelet[2814]: E1212 18:24:42.368039 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f787b0ba8e339ccfcdfa69760b0f54873d8844aee9b34cfca34cd284c55aa862\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dl6p8" Dec 12 18:24:42.368132 kubelet[2814]: E1212 18:24:42.368061 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f787b0ba8e339ccfcdfa69760b0f54873d8844aee9b34cfca34cd284c55aa862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dl6p8" Dec 12 18:24:42.368221 kubelet[2814]: E1212 18:24:42.368134 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f787b0ba8e339ccfcdfa69760b0f54873d8844aee9b34cfca34cd284c55aa862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:24:42.369415 containerd[1630]: time="2025-12-12T18:24:42.369365140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69df7f9dd4-7g8qh,Uid:1b812c88-50ce-4177-8552-96cf1344574a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca461b5800ad19a51ff84c719b0e4bd0362e408bbad31fdd8f4d37bce5131557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.369596 kubelet[2814]: E1212 18:24:42.369554 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca461b5800ad19a51ff84c719b0e4bd0362e408bbad31fdd8f4d37bce5131557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.369724 kubelet[2814]: E1212 18:24:42.369601 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca461b5800ad19a51ff84c719b0e4bd0362e408bbad31fdd8f4d37bce5131557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69df7f9dd4-7g8qh" Dec 12 18:24:42.369724 kubelet[2814]: E1212 18:24:42.369619 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca461b5800ad19a51ff84c719b0e4bd0362e408bbad31fdd8f4d37bce5131557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69df7f9dd4-7g8qh" Dec 12 18:24:42.369724 kubelet[2814]: E1212 18:24:42.369660 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69df7f9dd4-7g8qh_calico-system(1b812c88-50ce-4177-8552-96cf1344574a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69df7f9dd4-7g8qh_calico-system(1b812c88-50ce-4177-8552-96cf1344574a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ca461b5800ad19a51ff84c719b0e4bd0362e408bbad31fdd8f4d37bce5131557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69df7f9dd4-7g8qh" podUID="1b812c88-50ce-4177-8552-96cf1344574a" Dec 12 18:24:42.399438 systemd[1]: Created slice kubepods-besteffort-pod91aeba92_11d6_4129_85e3_7dedd0625bf3.slice - libcontainer container kubepods-besteffort-pod91aeba92_11d6_4129_85e3_7dedd0625bf3.slice. Dec 12 18:24:42.403446 containerd[1630]: time="2025-12-12T18:24:42.403418716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xnbvh,Uid:91aeba92-11d6-4129-85e3-7dedd0625bf3,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:42.455289 containerd[1630]: time="2025-12-12T18:24:42.455248038Z" level=error msg="Failed to destroy network for sandbox \"3a946bbd29bec3fc2a3c7f6a850416216f3a0eeda0afa516607aa979dbd7bd80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.456891 containerd[1630]: time="2025-12-12T18:24:42.456845497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xnbvh,Uid:91aeba92-11d6-4129-85e3-7dedd0625bf3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a946bbd29bec3fc2a3c7f6a850416216f3a0eeda0afa516607aa979dbd7bd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.457193 kubelet[2814]: E1212 18:24:42.457163 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3a946bbd29bec3fc2a3c7f6a850416216f3a0eeda0afa516607aa979dbd7bd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:24:42.457245 kubelet[2814]: E1212 18:24:42.457209 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a946bbd29bec3fc2a3c7f6a850416216f3a0eeda0afa516607aa979dbd7bd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xnbvh" Dec 12 18:24:42.457245 kubelet[2814]: E1212 18:24:42.457233 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a946bbd29bec3fc2a3c7f6a850416216f3a0eeda0afa516607aa979dbd7bd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xnbvh" Dec 12 18:24:42.457296 kubelet[2814]: E1212 18:24:42.457281 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a946bbd29bec3fc2a3c7f6a850416216f3a0eeda0afa516607aa979dbd7bd80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xnbvh" 
podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:24:42.510165 kubelet[2814]: E1212 18:24:42.510125 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:42.512768 containerd[1630]: time="2025-12-12T18:24:42.512680466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:24:43.080005 systemd[1]: run-netns-cni\x2d0f7b23bc\x2d70a0\x2def37\x2d8109\x2df65988cb92d9.mount: Deactivated successfully. Dec 12 18:24:43.080398 systemd[1]: run-netns-cni\x2d36b67c5c\x2d03b9\x2db703\x2d9cc7\x2d1db8628b0027.mount: Deactivated successfully. Dec 12 18:24:43.080626 systemd[1]: run-netns-cni\x2db85b946b\x2d561f\x2da2ec\x2d80ec\x2d2e81abc669bf.mount: Deactivated successfully. Dec 12 18:24:43.080819 systemd[1]: run-netns-cni\x2df0c8a6df\x2dcb5f\x2db47e\x2d1aaf\x2d0f1fec58e16f.mount: Deactivated successfully. Dec 12 18:24:45.938744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192489544.mount: Deactivated successfully. 
Dec 12 18:24:45.961831 containerd[1630]: time="2025-12-12T18:24:45.961764314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:45.962779 containerd[1630]: time="2025-12-12T18:24:45.962654423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 18:24:45.963320 containerd[1630]: time="2025-12-12T18:24:45.963285773Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:45.966667 containerd[1630]: time="2025-12-12T18:24:45.966635031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:24:45.967440 containerd[1630]: time="2025-12-12T18:24:45.967362441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.454412875s" Dec 12 18:24:45.967440 containerd[1630]: time="2025-12-12T18:24:45.967438961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:24:45.988232 containerd[1630]: time="2025-12-12T18:24:45.988194940Z" level=info msg="CreateContainer within sandbox \"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:24:45.996331 containerd[1630]: time="2025-12-12T18:24:45.996301416Z" level=info msg="Container 
c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:45.999640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163750684.mount: Deactivated successfully. Dec 12 18:24:46.005096 containerd[1630]: time="2025-12-12T18:24:46.005057642Z" level=info msg="CreateContainer within sandbox \"7c440049dd6034c13428eba57aec9593eb4b1fbfe6e0c1a07e681e545bb8b2ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce\"" Dec 12 18:24:46.007226 containerd[1630]: time="2025-12-12T18:24:46.006172991Z" level=info msg="StartContainer for \"c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce\"" Dec 12 18:24:46.009087 containerd[1630]: time="2025-12-12T18:24:46.009065980Z" level=info msg="connecting to shim c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce" address="unix:///run/containerd/s/7fb0ad678c6ccc8873ebea21f8439e4ac4321f0afc0a18bd8640a22640617b24" protocol=ttrpc version=3 Dec 12 18:24:46.052145 systemd[1]: Started cri-containerd-c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce.scope - libcontainer container c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce. 
Dec 12 18:24:46.111000 audit: BPF prog-id=184 op=LOAD Dec 12 18:24:46.114914 kernel: kauditd_printk_skb: 50 callbacks suppressed Dec 12 18:24:46.114976 kernel: audit: type=1334 audit(1765563886.111:584): prog-id=184 op=LOAD Dec 12 18:24:46.111000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.125541 kernel: audit: type=1300 audit(1765563886.111:584): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.137311 kernel: audit: type=1327 audit(1765563886.111:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.137376 kernel: audit: type=1334 audit(1765563886.111:585): prog-id=185 op=LOAD Dec 12 18:24:46.111000 audit: BPF prog-id=185 op=LOAD Dec 12 18:24:46.111000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.151910 kernel: audit: type=1300 audit(1765563886.111:585): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.151975 kernel: audit: type=1327 audit(1765563886.111:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.111000 audit: BPF prog-id=185 op=UNLOAD Dec 12 18:24:46.155296 kernel: audit: type=1334 audit(1765563886.111:586): prog-id=185 op=UNLOAD Dec 12 18:24:46.155333 kernel: audit: type=1300 audit(1765563886.111:586): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.111000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.162535 containerd[1630]: time="2025-12-12T18:24:46.162472701Z" level=info msg="StartContainer for 
\"c1f876718e20520fab64927760795c6e0c23dab2f79c374e82f5c93aeb52e1ce\" returns successfully" Dec 12 18:24:46.170274 kernel: audit: type=1327 audit(1765563886.111:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.172344 kernel: audit: type=1334 audit(1765563886.111:587): prog-id=184 op=UNLOAD Dec 12 18:24:46.111000 audit: BPF prog-id=184 op=UNLOAD Dec 12 18:24:46.111000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.112000 audit: BPF prog-id=186 op=LOAD Dec 12 18:24:46.112000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3333 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:46.112000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663837363731386532303532306661623634393237373630373935 Dec 12 18:24:46.246391 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:24:46.246472 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:24:46.412821 kubelet[2814]: I1212 18:24:46.411732 2814 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf8vg\" (UniqueName: \"kubernetes.io/projected/1b812c88-50ce-4177-8552-96cf1344574a-kube-api-access-hf8vg\") pod \"1b812c88-50ce-4177-8552-96cf1344574a\" (UID: \"1b812c88-50ce-4177-8552-96cf1344574a\") " Dec 12 18:24:46.412821 kubelet[2814]: I1212 18:24:46.411773 2814 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b812c88-50ce-4177-8552-96cf1344574a-whisker-backend-key-pair\") pod \"1b812c88-50ce-4177-8552-96cf1344574a\" (UID: \"1b812c88-50ce-4177-8552-96cf1344574a\") " Dec 12 18:24:46.412821 kubelet[2814]: I1212 18:24:46.411792 2814 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b812c88-50ce-4177-8552-96cf1344574a-whisker-ca-bundle\") pod \"1b812c88-50ce-4177-8552-96cf1344574a\" (UID: \"1b812c88-50ce-4177-8552-96cf1344574a\") " Dec 12 18:24:46.413710 kubelet[2814]: I1212 18:24:46.413690 2814 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b812c88-50ce-4177-8552-96cf1344574a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1b812c88-50ce-4177-8552-96cf1344574a" (UID: "1b812c88-50ce-4177-8552-96cf1344574a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:24:46.416881 kubelet[2814]: I1212 18:24:46.416755 2814 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b812c88-50ce-4177-8552-96cf1344574a-kube-api-access-hf8vg" (OuterVolumeSpecName: "kube-api-access-hf8vg") pod "1b812c88-50ce-4177-8552-96cf1344574a" (UID: "1b812c88-50ce-4177-8552-96cf1344574a"). InnerVolumeSpecName "kube-api-access-hf8vg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:24:46.419179 kubelet[2814]: I1212 18:24:46.419131 2814 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b812c88-50ce-4177-8552-96cf1344574a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1b812c88-50ce-4177-8552-96cf1344574a" (UID: "1b812c88-50ce-4177-8552-96cf1344574a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:24:46.513330 kubelet[2814]: I1212 18:24:46.513277 2814 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b812c88-50ce-4177-8552-96cf1344574a-whisker-backend-key-pair\") on node \"172-234-207-166\" DevicePath \"\"" Dec 12 18:24:46.513330 kubelet[2814]: I1212 18:24:46.513312 2814 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b812c88-50ce-4177-8552-96cf1344574a-whisker-ca-bundle\") on node \"172-234-207-166\" DevicePath \"\"" Dec 12 18:24:46.513330 kubelet[2814]: I1212 18:24:46.513332 2814 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hf8vg\" (UniqueName: \"kubernetes.io/projected/1b812c88-50ce-4177-8552-96cf1344574a-kube-api-access-hf8vg\") on node \"172-234-207-166\" DevicePath \"\"" Dec 12 18:24:46.527541 kubelet[2814]: E1212 18:24:46.527502 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:46.538908 systemd[1]: Removed slice kubepods-besteffort-pod1b812c88_50ce_4177_8552_96cf1344574a.slice - libcontainer container kubepods-besteffort-pod1b812c88_50ce_4177_8552_96cf1344574a.slice. Dec 12 18:24:46.559731 kubelet[2814]: I1212 18:24:46.559564 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bhtmb" podStartSLOduration=2.036966166 podStartE2EDuration="10.559550032s" podCreationTimestamp="2025-12-12 18:24:36 +0000 UTC" firstStartedPulling="2025-12-12 18:24:37.446571154 +0000 UTC m=+20.148807648" lastFinishedPulling="2025-12-12 18:24:45.96915502 +0000 UTC m=+28.671391514" observedRunningTime="2025-12-12 18:24:46.551741085 +0000 UTC m=+29.253977579" watchObservedRunningTime="2025-12-12 18:24:46.559550032 +0000 UTC m=+29.261786526" Dec 12 18:24:46.605701 systemd[1]: Created slice kubepods-besteffort-podc2a87dc3_7f3a_476b_8c32_9dcd8f2f92a4.slice - libcontainer container kubepods-besteffort-podc2a87dc3_7f3a_476b_8c32_9dcd8f2f92a4.slice. 
Dec 12 18:24:46.715096 kubelet[2814]: I1212 18:24:46.714924 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4-whisker-ca-bundle\") pod \"whisker-6755b8ddc4-gfq4r\" (UID: \"c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4\") " pod="calico-system/whisker-6755b8ddc4-gfq4r" Dec 12 18:24:46.715096 kubelet[2814]: I1212 18:24:46.715010 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4-whisker-backend-key-pair\") pod \"whisker-6755b8ddc4-gfq4r\" (UID: \"c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4\") " pod="calico-system/whisker-6755b8ddc4-gfq4r" Dec 12 18:24:46.715096 kubelet[2814]: I1212 18:24:46.715029 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcnj6\" (UniqueName: \"kubernetes.io/projected/c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4-kube-api-access-qcnj6\") pod \"whisker-6755b8ddc4-gfq4r\" (UID: \"c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4\") " pod="calico-system/whisker-6755b8ddc4-gfq4r" Dec 12 18:24:46.911806 containerd[1630]: time="2025-12-12T18:24:46.911752842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6755b8ddc4-gfq4r,Uid:c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:46.941284 systemd[1]: var-lib-kubelet-pods-1b812c88\x2d50ce\x2d4177\x2d8552\x2d96cf1344574a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhf8vg.mount: Deactivated successfully. Dec 12 18:24:46.942000 systemd[1]: var-lib-kubelet-pods-1b812c88\x2d50ce\x2d4177\x2d8552\x2d96cf1344574a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 12 18:24:47.069934 systemd-networkd[1517]: cali2c2a4b40340: Link UP Dec 12 18:24:47.070666 systemd-networkd[1517]: cali2c2a4b40340: Gained carrier Dec 12 18:24:47.087227 containerd[1630]: 2025-12-12 18:24:46.937 [INFO][3858] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:24:47.087227 containerd[1630]: 2025-12-12 18:24:46.982 [INFO][3858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0 whisker-6755b8ddc4- calico-system c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4 930 0 2025-12-12 18:24:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6755b8ddc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-207-166 whisker-6755b8ddc4-gfq4r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2c2a4b40340 [] [] }} ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-" Dec 12 18:24:47.087227 containerd[1630]: 2025-12-12 18:24:46.982 [INFO][3858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.087227 containerd[1630]: 2025-12-12 18:24:47.012 [INFO][3869] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" HandleID="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Workload="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.012 
[INFO][3869] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" HandleID="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Workload="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-207-166", "pod":"whisker-6755b8ddc4-gfq4r", "timestamp":"2025-12-12 18:24:47.012206887 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.012 [INFO][3869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.012 [INFO][3869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.012 [INFO][3869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.021 [INFO][3869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" host="172-234-207-166" Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.027 [INFO][3869] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.032 [INFO][3869] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.034 [INFO][3869] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:47.087876 containerd[1630]: 2025-12-12 18:24:47.036 [INFO][3869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.036 [INFO][3869] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" host="172-234-207-166" Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.038 [INFO][3869] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27 Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.045 [INFO][3869] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" host="172-234-207-166" Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.052 [INFO][3869] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.193/26] block=192.168.125.192/26 
handle="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" host="172-234-207-166" Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.052 [INFO][3869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.193/26] handle="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" host="172-234-207-166" Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.052 [INFO][3869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:24:47.088091 containerd[1630]: 2025-12-12 18:24:47.052 [INFO][3869] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.193/26] IPv6=[] ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" HandleID="k8s-pod-network.b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Workload="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.088227 containerd[1630]: 2025-12-12 18:24:47.057 [INFO][3858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0", GenerateName:"whisker-6755b8ddc4-", Namespace:"calico-system", SelfLink:"", UID:"c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6755b8ddc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"whisker-6755b8ddc4-gfq4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c2a4b40340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:47.088227 containerd[1630]: 2025-12-12 18:24:47.057 [INFO][3858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.193/32] ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.088298 containerd[1630]: 2025-12-12 18:24:47.057 [INFO][3858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2a4b40340 ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.088298 containerd[1630]: 2025-12-12 18:24:47.071 [INFO][3858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.088527 containerd[1630]: 2025-12-12 18:24:47.072 [INFO][3858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" 
Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0", GenerateName:"whisker-6755b8ddc4-", Namespace:"calico-system", SelfLink:"", UID:"c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6755b8ddc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27", Pod:"whisker-6755b8ddc4-gfq4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c2a4b40340", MAC:"d6:21:39:78:c3:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:47.088583 containerd[1630]: 2025-12-12 18:24:47.083 [INFO][3858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" Namespace="calico-system" Pod="whisker-6755b8ddc4-gfq4r" WorkloadEndpoint="172--234--207--166-k8s-whisker--6755b8ddc4--gfq4r-eth0" Dec 12 18:24:47.132095 
containerd[1630]: time="2025-12-12T18:24:47.131321070Z" level=info msg="connecting to shim b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27" address="unix:///run/containerd/s/4badd9301c5db0e97012dd9dac024117006a76c236983994f22747c61bbe4ba8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:47.161175 systemd[1]: Started cri-containerd-b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27.scope - libcontainer container b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27. Dec 12 18:24:47.171000 audit: BPF prog-id=187 op=LOAD Dec 12 18:24:47.171000 audit: BPF prog-id=188 op=LOAD Dec 12 18:24:47.171000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 18:24:47.171000 audit: BPF prog-id=188 op=UNLOAD Dec 12 18:24:47.171000 audit[3904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 18:24:47.171000 audit: BPF prog-id=189 op=LOAD Dec 12 
18:24:47.171000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 18:24:47.172000 audit: BPF prog-id=190 op=LOAD Dec 12 18:24:47.172000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 18:24:47.172000 audit: BPF prog-id=190 op=UNLOAD Dec 12 18:24:47.172000 audit[3904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 
18:24:47.172000 audit: BPF prog-id=189 op=UNLOAD Dec 12 18:24:47.172000 audit[3904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 18:24:47.172000 audit: BPF prog-id=191 op=LOAD Dec 12 18:24:47.172000 audit[3904]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3891 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613633613430323633383639656535626234366330343136303130 Dec 12 18:24:47.212338 containerd[1630]: time="2025-12-12T18:24:47.212270139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6755b8ddc4-gfq4r,Uid:c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2a63a40263869ee5bb46c04160102c24771c2753a4696ed0c281785774efb27\"" Dec 12 18:24:47.214279 containerd[1630]: time="2025-12-12T18:24:47.214234528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:24:47.342242 containerd[1630]: time="2025-12-12T18:24:47.342068218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
18:24:47.343310 containerd[1630]: time="2025-12-12T18:24:47.343172157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:24:47.343828 containerd[1630]: time="2025-12-12T18:24:47.343615766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:47.344073 kubelet[2814]: E1212 18:24:47.343956 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:24:47.344146 kubelet[2814]: E1212 18:24:47.344081 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:24:47.344222 kubelet[2814]: E1212 18:24:47.344189 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:47.346758 containerd[1630]: time="2025-12-12T18:24:47.346712196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:24:47.396578 kubelet[2814]: I1212 18:24:47.396539 2814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1b812c88-50ce-4177-8552-96cf1344574a" path="/var/lib/kubelet/pods/1b812c88-50ce-4177-8552-96cf1344574a/volumes" Dec 12 18:24:47.472386 containerd[1630]: time="2025-12-12T18:24:47.472314867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:47.473494 containerd[1630]: time="2025-12-12T18:24:47.473450366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:24:47.474016 containerd[1630]: time="2025-12-12T18:24:47.473520765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:47.474064 kubelet[2814]: E1212 18:24:47.473668 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:24:47.474064 kubelet[2814]: E1212 18:24:47.473719 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:24:47.474064 kubelet[2814]: E1212 18:24:47.473824 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:47.474064 kubelet[2814]: E1212 18:24:47.473874 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:24:47.530173 kubelet[2814]: I1212 18:24:47.530130 2814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:24:47.531499 kubelet[2814]: E1212 18:24:47.531435 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:47.533276 kubelet[2814]: E1212 18:24:47.533200 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:24:47.664000 audit[3941]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:47.664000 audit[3941]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeec064d70 a2=0 a3=7ffeec064d5c items=0 ppid=2925 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:47.668000 audit[3941]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:47.668000 audit[3941]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeec064d70 a2=0 a3=0 items=0 ppid=2925 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:47.668000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:48.534333 kubelet[2814]: E1212 18:24:48.534260 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:24:48.580348 systemd-networkd[1517]: cali2c2a4b40340: Gained IPv6LL Dec 12 18:24:53.397032 containerd[1630]: time="2025-12-12T18:24:53.396791323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xnbvh,Uid:91aeba92-11d6-4129-85e3-7dedd0625bf3,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:53.400757 containerd[1630]: time="2025-12-12T18:24:53.400236263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-wf2rc,Uid:3f177278-7ed2-426c-a0d0-27da05aa7f69,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:24:53.402670 kubelet[2814]: E1212 18:24:53.401595 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:53.405027 containerd[1630]: time="2025-12-12T18:24:53.401929793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt25m,Uid:4664a851-47ee-4ce8-a989-b4c3338ff2f7,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:53.562540 systemd-networkd[1517]: cali5f0861f39ee: Link UP Dec 12 18:24:53.564740 systemd-networkd[1517]: cali5f0861f39ee: Gained carrier Dec 12 18:24:53.591540 containerd[1630]: 2025-12-12 18:24:53.452 [INFO][4140] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:24:53.591540 containerd[1630]: 2025-12-12 18:24:53.465 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0 calico-apiserver-5d49f44685- calico-apiserver 3f177278-7ed2-426c-a0d0-27da05aa7f69 864 0 2025-12-12 18:24:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d49f44685 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-207-166 calico-apiserver-5d49f44685-wf2rc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f0861f39ee [] [] }} ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-" Dec 12 18:24:53.591540 containerd[1630]: 2025-12-12 18:24:53.465 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.591540 containerd[1630]: 2025-12-12 18:24:53.504 [INFO][4168] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" HandleID="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Workload="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.505 [INFO][4168] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" HandleID="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Workload="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-207-166", "pod":"calico-apiserver-5d49f44685-wf2rc", "timestamp":"2025-12-12 18:24:53.504678912 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.506 [INFO][4168] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.506 [INFO][4168] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.506 [INFO][4168] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.519 [INFO][4168] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" host="172-234-207-166" Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.524 [INFO][4168] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.529 [INFO][4168] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.532 [INFO][4168] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.591846 containerd[1630]: 2025-12-12 18:24:53.535 [INFO][4168] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.535 [INFO][4168] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.125.192/26 handle="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" host="172-234-207-166" Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.538 [INFO][4168] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.545 [INFO][4168] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" host="172-234-207-166" Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.552 [INFO][4168] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.194/26] block=192.168.125.192/26 handle="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" host="172-234-207-166" Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.552 [INFO][4168] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.194/26] handle="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" host="172-234-207-166" Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.552 [INFO][4168] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:24:53.592396 containerd[1630]: 2025-12-12 18:24:53.552 [INFO][4168] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.194/26] IPv6=[] ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" HandleID="k8s-pod-network.c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Workload="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.592559 containerd[1630]: 2025-12-12 18:24:53.555 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0", GenerateName:"calico-apiserver-5d49f44685-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f177278-7ed2-426c-a0d0-27da05aa7f69", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49f44685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"calico-apiserver-5d49f44685-wf2rc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f0861f39ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:53.592637 containerd[1630]: 2025-12-12 18:24:53.556 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.194/32] ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.592637 containerd[1630]: 2025-12-12 18:24:53.556 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f0861f39ee ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.592637 containerd[1630]: 2025-12-12 18:24:53.569 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.592736 containerd[1630]: 2025-12-12 18:24:53.572 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0", GenerateName:"calico-apiserver-5d49f44685-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f177278-7ed2-426c-a0d0-27da05aa7f69", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49f44685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c", Pod:"calico-apiserver-5d49f44685-wf2rc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f0861f39ee", MAC:"6e:fe:f0:ac:16:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:53.592910 containerd[1630]: 2025-12-12 18:24:53.584 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-wf2rc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--wf2rc-eth0" Dec 12 18:24:53.618043 containerd[1630]: time="2025-12-12T18:24:53.617097930Z" level=info msg="connecting to shim 
c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c" address="unix:///run/containerd/s/07ef0b557bbd01b9fe03f00ac6043923078278616e02907dc64b5f3eaaa008dc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:53.656391 systemd[1]: Started cri-containerd-c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c.scope - libcontainer container c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c. Dec 12 18:24:53.670349 systemd-networkd[1517]: cali77cb80178d0: Link UP Dec 12 18:24:53.671957 systemd-networkd[1517]: cali77cb80178d0: Gained carrier Dec 12 18:24:53.690066 containerd[1630]: 2025-12-12 18:24:53.481 [INFO][4135] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:24:53.690066 containerd[1630]: 2025-12-12 18:24:53.497 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-csi--node--driver--xnbvh-eth0 csi-node-driver- calico-system 91aeba92-11d6-4129-85e3-7dedd0625bf3 766 0 2025-12-12 18:24:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-207-166 csi-node-driver-xnbvh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali77cb80178d0 [] [] }} ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-" Dec 12 18:24:53.690066 containerd[1630]: 2025-12-12 18:24:53.498 [INFO][4135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" 
WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.690066 containerd[1630]: 2025-12-12 18:24:53.547 [INFO][4179] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" HandleID="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Workload="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.547 [INFO][4179] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" HandleID="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Workload="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f130), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-207-166", "pod":"csi-node-driver-xnbvh", "timestamp":"2025-12-12 18:24:53.547456707 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.547 [INFO][4179] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.552 [INFO][4179] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.552 [INFO][4179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.621 [INFO][4179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" host="172-234-207-166" Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.633 [INFO][4179] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.639 [INFO][4179] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.642 [INFO][4179] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.644 [INFO][4179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.690381 containerd[1630]: 2025-12-12 18:24:53.644 [INFO][4179] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" host="172-234-207-166" Dec 12 18:24:53.690632 containerd[1630]: 2025-12-12 18:24:53.646 [INFO][4179] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172 Dec 12 18:24:53.690632 containerd[1630]: 2025-12-12 18:24:53.649 [INFO][4179] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" host="172-234-207-166" Dec 12 18:24:53.690632 containerd[1630]: 2025-12-12 18:24:53.656 [INFO][4179] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.195/26] block=192.168.125.192/26 
handle="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" host="172-234-207-166" Dec 12 18:24:53.690632 containerd[1630]: 2025-12-12 18:24:53.656 [INFO][4179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.195/26] handle="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" host="172-234-207-166" Dec 12 18:24:53.690632 containerd[1630]: 2025-12-12 18:24:53.656 [INFO][4179] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:24:53.690632 containerd[1630]: 2025-12-12 18:24:53.656 [INFO][4179] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.195/26] IPv6=[] ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" HandleID="k8s-pod-network.19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Workload="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.690768 containerd[1630]: 2025-12-12 18:24:53.661 [INFO][4135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-csi--node--driver--xnbvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91aeba92-11d6-4129-85e3-7dedd0625bf3", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"csi-node-driver-xnbvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77cb80178d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:53.690835 containerd[1630]: 2025-12-12 18:24:53.661 [INFO][4135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.195/32] ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.690835 containerd[1630]: 2025-12-12 18:24:53.661 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77cb80178d0 ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.690835 containerd[1630]: 2025-12-12 18:24:53.674 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.690910 containerd[1630]: 2025-12-12 18:24:53.675 [INFO][4135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-csi--node--driver--xnbvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91aeba92-11d6-4129-85e3-7dedd0625bf3", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172", Pod:"csi-node-driver-xnbvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77cb80178d0", MAC:"4e:aa:20:57:8f:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:53.690972 containerd[1630]: 2025-12-12 18:24:53.685 [INFO][4135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" Namespace="calico-system" Pod="csi-node-driver-xnbvh" WorkloadEndpoint="172--234--207--166-k8s-csi--node--driver--xnbvh-eth0" Dec 12 18:24:53.697000 audit: BPF prog-id=192 op=LOAD Dec 12 18:24:53.702109 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 12 18:24:53.702194 kernel: audit: type=1334 audit(1765563893.697:599): prog-id=192 op=LOAD Dec 12 18:24:53.701000 audit: BPF prog-id=193 op=LOAD Dec 12 18:24:53.714146 kernel: audit: type=1334 audit(1765563893.701:600): prog-id=193 op=LOAD Dec 12 18:24:53.714219 kernel: audit: type=1300 audit(1765563893.701:600): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.714249 kernel: audit: type=1327 audit(1765563893.701:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit: BPF prog-id=193 op=UNLOAD Dec 12 18:24:53.732971 kernel: audit: 
type=1334 audit(1765563893.701:601): prog-id=193 op=UNLOAD Dec 12 18:24:53.733094 kernel: audit: type=1300 audit(1765563893.701:601): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.743756 kernel: audit: type=1327 audit(1765563893.701:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.743813 kernel: audit: type=1334 audit(1765563893.701:602): prog-id=194 op=LOAD Dec 12 18:24:53.701000 audit: BPF prog-id=194 op=LOAD Dec 12 18:24:53.751055 kernel: audit: type=1300 audit(1765563893.701:602): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 
ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.761412 kernel: audit: type=1327 audit(1765563893.701:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit: BPF prog-id=195 op=LOAD Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit: BPF prog-id=195 op=UNLOAD Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit: BPF prog-id=194 op=UNLOAD Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.701000 audit: BPF prog-id=196 op=LOAD Dec 12 18:24:53.701000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4209 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337333332343332613533653632363662343637356339396166623864 Dec 12 18:24:53.765245 containerd[1630]: time="2025-12-12T18:24:53.765182774Z" level=info msg="connecting to shim 19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172" address="unix:///run/containerd/s/10157594d90e679bc685acee2a77810d1c8bdf1830fb816df79d10799759e5ae" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:53.847695 systemd[1]: Started 
cri-containerd-19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172.scope - libcontainer container 19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172. Dec 12 18:24:53.859100 containerd[1630]: time="2025-12-12T18:24:53.858106584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-wf2rc,Uid:3f177278-7ed2-426c-a0d0-27da05aa7f69,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c7332432a53e6266b4675c99afb8d16bdeda51dec2def325607d483bee0c549c\"" Dec 12 18:24:53.861859 containerd[1630]: time="2025-12-12T18:24:53.861806703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:24:53.882065 systemd-networkd[1517]: calib3a7f50e26e: Link UP Dec 12 18:24:53.888495 systemd-networkd[1517]: calib3a7f50e26e: Gained carrier Dec 12 18:24:53.905000 audit: BPF prog-id=197 op=LOAD Dec 12 18:24:53.906000 audit: BPF prog-id=198 op=LOAD Dec 12 18:24:53.906000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.906000 audit: BPF prog-id=198 op=UNLOAD Dec 12 18:24:53.906000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.906000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.906000 audit: BPF prog-id=199 op=LOAD Dec 12 18:24:53.906000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.907000 audit: BPF prog-id=200 op=LOAD Dec 12 18:24:53.907000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.907000 audit: BPF prog-id=200 op=UNLOAD Dec 12 18:24:53.907000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:24:53.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.907000 audit: BPF prog-id=199 op=UNLOAD Dec 12 18:24:53.907000 audit[4267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.907000 audit: BPF prog-id=201 op=LOAD Dec 12 18:24:53.907000 audit[4267]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4253 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139633466663232313335373066366234323736666466633931633535 Dec 12 18:24:53.910688 kubelet[2814]: I1212 18:24:53.910499 2814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:24:53.914819 kubelet[2814]: E1212 18:24:53.914787 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:53.922005 containerd[1630]: 2025-12-12 18:24:53.474 [INFO][4146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:24:53.922005 containerd[1630]: 2025-12-12 18:24:53.490 [INFO][4146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0 coredns-66bc5c9577- kube-system 4664a851-47ee-4ce8-a989-b4c3338ff2f7 857 0 2025-12-12 18:24:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-207-166 coredns-66bc5c9577-dt25m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib3a7f50e26e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-" Dec 12 18:24:53.922005 containerd[1630]: 2025-12-12 18:24:53.490 [INFO][4146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.922005 containerd[1630]: 2025-12-12 18:24:53.549 [INFO][4177] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" HandleID="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Workload="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.550 [INFO][4177] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" HandleID="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Workload="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-207-166", "pod":"coredns-66bc5c9577-dt25m", "timestamp":"2025-12-12 18:24:53.549802157 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.550 [INFO][4177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.658 [INFO][4177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.658 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.721 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" host="172-234-207-166" Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.746 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.773 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.784 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.922263 containerd[1630]: 2025-12-12 18:24:53.795 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.795 [INFO][4177] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" host="172-234-207-166" Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.808 [INFO][4177] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.819 [INFO][4177] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" host="172-234-207-166" Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.855 [INFO][4177] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.196/26] block=192.168.125.192/26 
handle="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" host="172-234-207-166" Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.856 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.196/26] handle="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" host="172-234-207-166" Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.856 [INFO][4177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:24:53.922460 containerd[1630]: 2025-12-12 18:24:53.857 [INFO][4177] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.196/26] IPv6=[] ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" HandleID="k8s-pod-network.bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Workload="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.922605 containerd[1630]: 2025-12-12 18:24:53.869 [INFO][4146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4664a851-47ee-4ce8-a989-b4c3338ff2f7", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"coredns-66bc5c9577-dt25m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3a7f50e26e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:53.922605 containerd[1630]: 2025-12-12 18:24:53.870 [INFO][4146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.196/32] ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.922605 containerd[1630]: 2025-12-12 18:24:53.871 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3a7f50e26e ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" 
WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.922605 containerd[1630]: 2025-12-12 18:24:53.895 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.922605 containerd[1630]: 2025-12-12 18:24:53.895 [INFO][4146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4664a851-47ee-4ce8-a989-b4c3338ff2f7", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d", Pod:"coredns-66bc5c9577-dt25m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3a7f50e26e", MAC:"36:b1:23:11:8a:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:53.922605 containerd[1630]: 2025-12-12 18:24:53.911 [INFO][4146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" Namespace="kube-system" Pod="coredns-66bc5c9577-dt25m" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--dt25m-eth0" Dec 12 18:24:53.971245 containerd[1630]: time="2025-12-12T18:24:53.971100822Z" level=info msg="connecting to shim bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d" address="unix:///run/containerd/s/407ef2747e86ac5b94539380ae956ff68e4992a9163d687bb66f0a7cb1ad1080" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:53.977586 containerd[1630]: time="2025-12-12T18:24:53.977546781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xnbvh,Uid:91aeba92-11d6-4129-85e3-7dedd0625bf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"19c4ff2213570f6b4276fdfc91c55b2be7c6cb61905953cb0680aa36a7708172\"" Dec 12 18:24:53.988000 audit[4327]: 
NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:53.988000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffefbbcd0f0 a2=0 a3=7ffefbbcd0dc items=0 ppid=2925 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:53.992000 audit[4327]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:53.992000 audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffefbbcd0f0 a2=0 a3=7ffefbbcd0dc items=0 ppid=2925 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:53.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:54.001758 containerd[1630]: time="2025-12-12T18:24:54.001701899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:54.004751 containerd[1630]: time="2025-12-12T18:24:54.004084918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:24:54.004751 containerd[1630]: time="2025-12-12T18:24:54.004141768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" 
Dec 12 18:24:54.004854 kubelet[2814]: E1212 18:24:54.004238 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:24:54.004854 kubelet[2814]: E1212 18:24:54.004280 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:24:54.004854 kubelet[2814]: E1212 18:24:54.004463 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:54.004854 kubelet[2814]: E1212 18:24:54.004505 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:24:54.005274 containerd[1630]: time="2025-12-12T18:24:54.005258669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:24:54.008552 systemd[1]: Started cri-containerd-bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d.scope - libcontainer container 
bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d. Dec 12 18:24:54.022000 audit: BPF prog-id=202 op=LOAD Dec 12 18:24:54.023000 audit: BPF prog-id=203 op=LOAD Dec 12 18:24:54.023000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.023000 audit: BPF prog-id=203 op=UNLOAD Dec 12 18:24:54.023000 audit[4326]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.023000 audit: BPF prog-id=204 op=LOAD Dec 12 18:24:54.023000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.023000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.023000 audit: BPF prog-id=205 op=LOAD Dec 12 18:24:54.023000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.024000 audit: BPF prog-id=205 op=UNLOAD Dec 12 18:24:54.024000 audit[4326]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.024000 audit: BPF prog-id=204 op=UNLOAD Dec 12 18:24:54.024000 audit[4326]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:54.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.024000 audit: BPF prog-id=206 op=LOAD Dec 12 18:24:54.024000 audit[4326]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4313 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264336561343934393035336331653234623463353434303733616366 Dec 12 18:24:54.065697 containerd[1630]: time="2025-12-12T18:24:54.065582105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dt25m,Uid:4664a851-47ee-4ce8-a989-b4c3338ff2f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d\"" Dec 12 18:24:54.066920 kubelet[2814]: E1212 18:24:54.066893 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:54.070587 containerd[1630]: time="2025-12-12T18:24:54.070561235Z" level=info msg="CreateContainer within sandbox \"bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:24:54.079062 containerd[1630]: time="2025-12-12T18:24:54.079036164Z" level=info msg="Container 122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a: CDI devices 
from CRI Config.CDIDevices: []" Dec 12 18:24:54.083242 containerd[1630]: time="2025-12-12T18:24:54.083197624Z" level=info msg="CreateContainer within sandbox \"bd3ea4949053c1e24b4c544073acf627dada22300fbd3c73cfe00f8929a84f5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a\"" Dec 12 18:24:54.083927 containerd[1630]: time="2025-12-12T18:24:54.083891163Z" level=info msg="StartContainer for \"122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a\"" Dec 12 18:24:54.085041 containerd[1630]: time="2025-12-12T18:24:54.084975914Z" level=info msg="connecting to shim 122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a" address="unix:///run/containerd/s/407ef2747e86ac5b94539380ae956ff68e4992a9163d687bb66f0a7cb1ad1080" protocol=ttrpc version=3 Dec 12 18:24:54.116175 systemd[1]: Started cri-containerd-122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a.scope - libcontainer container 122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a. 
Dec 12 18:24:54.139967 containerd[1630]: time="2025-12-12T18:24:54.139919009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:54.141407 containerd[1630]: time="2025-12-12T18:24:54.141357610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:24:54.140000 audit: BPF prog-id=207 op=LOAD Dec 12 18:24:54.141565 containerd[1630]: time="2025-12-12T18:24:54.141547929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:54.142039 kubelet[2814]: E1212 18:24:54.141899 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:24:54.143015 kubelet[2814]: E1212 18:24:54.142172 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:24:54.143392 kubelet[2814]: E1212 18:24:54.143255 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:54.141000 audit: BPF prog-id=208 op=LOAD Dec 12 18:24:54.141000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 
a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.141000 audit: BPF prog-id=208 op=UNLOAD Dec 12 18:24:54.141000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.141000 audit: BPF prog-id=209 op=LOAD Dec 12 18:24:54.141000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.141000 audit: BPF prog-id=210 op=LOAD Dec 12 18:24:54.141000 audit[4352]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.141000 audit: BPF prog-id=210 op=UNLOAD Dec 12 18:24:54.141000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.141000 audit: BPF prog-id=209 op=UNLOAD Dec 12 18:24:54.141000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.141000 audit: BPF prog-id=211 op=LOAD Dec 12 
18:24:54.141000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4313 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326136373535363564663938303337336563313331383337363965 Dec 12 18:24:54.147628 containerd[1630]: time="2025-12-12T18:24:54.147514919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:24:54.190376 containerd[1630]: time="2025-12-12T18:24:54.189376176Z" level=info msg="StartContainer for \"122a675565df980373ec13183769e2edb66361ccf7996709ebf5821174c5893a\" returns successfully" Dec 12 18:24:54.285449 containerd[1630]: time="2025-12-12T18:24:54.285386130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:54.286422 containerd[1630]: time="2025-12-12T18:24:54.286386999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:24:54.286571 containerd[1630]: time="2025-12-12T18:24:54.286502230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:54.286912 kubelet[2814]: E1212 18:24:54.286836 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:24:54.286912 kubelet[2814]: E1212 18:24:54.286892 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:24:54.287573 kubelet[2814]: E1212 18:24:54.287078 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:54.287573 kubelet[2814]: E1212 18:24:54.287125 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:24:54.307000 audit: BPF prog-id=212 op=LOAD Dec 12 18:24:54.307000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeea7033f0 a2=98 a3=1fffffffffffffff items=0 
ppid=4364 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:24:54.307000 audit: BPF prog-id=212 op=UNLOAD Dec 12 18:24:54.307000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeea7033c0 a3=0 items=0 ppid=4364 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:24:54.307000 audit: BPF prog-id=213 op=LOAD Dec 12 18:24:54.307000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeea7032d0 a2=94 a3=3 items=0 ppid=4364 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:24:54.307000 audit: BPF prog-id=213 op=UNLOAD Dec 12 18:24:54.307000 audit[4408]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=3 a1=7ffeea7032d0 a2=94 a3=3 items=0 ppid=4364 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:24:54.307000 audit: BPF prog-id=214 op=LOAD Dec 12 18:24:54.307000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeea703310 a2=94 a3=7ffeea7034f0 items=0 ppid=4364 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:24:54.307000 audit: BPF prog-id=214 op=UNLOAD Dec 12 18:24:54.307000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeea703310 a2=94 a3=7ffeea7034f0 items=0 ppid=4364 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.307000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:24:54.310000 audit: 
BPF prog-id=215 op=LOAD Dec 12 18:24:54.310000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffffd0c840 a2=98 a3=3 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.310000 audit: BPF prog-id=215 op=UNLOAD Dec 12 18:24:54.310000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffffd0c810 a3=0 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.310000 audit: BPF prog-id=216 op=LOAD Dec 12 18:24:54.310000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffffd0c630 a2=94 a3=54428f items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.310000 audit: BPF prog-id=216 op=UNLOAD Dec 12 18:24:54.310000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffffd0c630 a2=94 a3=54428f items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.310000 audit: BPF prog-id=217 op=LOAD Dec 12 18:24:54.310000 audit[4409]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffffd0c660 a2=94 a3=2 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.310000 audit: BPF prog-id=217 op=UNLOAD Dec 12 18:24:54.310000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffffd0c660 a2=0 a3=2 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.310000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.550828 kubelet[2814]: E1212 18:24:54.550711 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:24:54.555202 kubelet[2814]: E1212 18:24:54.554928 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:54.560123 kubelet[2814]: E1212 18:24:54.560068 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:54.561517 kubelet[2814]: 
E1212 18:24:54.561401 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:24:54.618000 audit: BPF prog-id=218 op=LOAD Dec 12 18:24:54.618000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffffd0c520 a2=94 a3=1 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.618000 audit: BPF prog-id=218 op=UNLOAD Dec 12 18:24:54.618000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffffd0c520 a2=94 a3=1 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.629000 audit: BPF prog-id=219 op=LOAD Dec 12 18:24:54.629000 audit[4409]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7fffffd0c510 a2=94 a3=4 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.629000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.629000 audit: BPF prog-id=219 op=UNLOAD Dec 12 18:24:54.629000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffffd0c510 a2=0 a3=4 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.629000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.629000 audit: BPF prog-id=220 op=LOAD Dec 12 18:24:54.629000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffffd0c370 a2=94 a3=5 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.629000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.630000 audit: BPF prog-id=220 op=UNLOAD Dec 12 18:24:54.630000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffffd0c370 a2=0 a3=5 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.630000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.630000 audit: BPF prog-id=221 op=LOAD Dec 12 18:24:54.630000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffffd0c590 a2=94 a3=6 items=0 
ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.630000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.631000 audit: BPF prog-id=221 op=UNLOAD Dec 12 18:24:54.631000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffffd0c590 a2=0 a3=6 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.631000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.631000 audit: BPF prog-id=222 op=LOAD Dec 12 18:24:54.631000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffffd0bd40 a2=94 a3=88 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.631000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.632000 audit: BPF prog-id=223 op=LOAD Dec 12 18:24:54.632000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffffd0bbc0 a2=94 a3=2 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.632000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.632000 audit: BPF prog-id=223 op=UNLOAD Dec 12 18:24:54.632000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffffd0bbf0 a2=0 a3=7fffffd0bcf0 items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.632000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.633000 audit: BPF prog-id=222 op=UNLOAD Dec 12 18:24:54.633000 audit[4409]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3c9c6d10 a2=0 a3=11e904e7a87cd13f items=0 ppid=4364 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.633000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:24:54.644000 audit: BPF prog-id=224 op=LOAD Dec 12 18:24:54.644000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe689fae30 a2=98 a3=1999999999999999 items=0 ppid=4364 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.644000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:24:54.644000 audit: BPF prog-id=224 op=UNLOAD Dec 12 18:24:54.644000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe689fae00 a3=0 items=0 ppid=4364 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.644000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:24:54.644000 audit: BPF prog-id=225 op=LOAD Dec 12 18:24:54.644000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe689fad10 a2=94 a3=ffff items=0 ppid=4364 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.644000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:24:54.644000 audit: BPF prog-id=225 op=UNLOAD Dec 12 18:24:54.644000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe689fad10 a2=94 a3=ffff items=0 ppid=4364 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.644000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:24:54.644000 audit: BPF prog-id=226 op=LOAD Dec 12 18:24:54.644000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe689fad50 a2=94 a3=7ffe689faf30 items=0 ppid=4364 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.644000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:24:54.644000 audit: BPF prog-id=226 op=UNLOAD Dec 12 18:24:54.644000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe689fad50 a2=94 a3=7ffe689faf30 items=0 ppid=4364 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.644000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:24:54.712789 systemd-networkd[1517]: vxlan.calico: Link UP Dec 12 18:24:54.715126 systemd-networkd[1517]: vxlan.calico: Gained carrier Dec 12 18:24:54.743000 audit: BPF prog-id=227 op=LOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdfc69d780 a2=98 a3=0 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.743000 audit: BPF prog-id=227 op=UNLOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffdfc69d750 a3=0 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.743000 audit: BPF prog-id=228 op=LOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdfc69d590 a2=94 a3=54428f items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.743000 audit: BPF prog-id=228 op=UNLOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdfc69d590 a2=94 a3=54428f items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.743000 audit: BPF prog-id=229 op=LOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdfc69d5c0 a2=94 a3=2 items=0 
ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.743000 audit: BPF prog-id=229 op=UNLOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdfc69d5c0 a2=0 a3=2 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.743000 audit: BPF prog-id=230 op=LOAD Dec 12 18:24:54.743000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdfc69d370 a2=94 a3=4 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.743000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.744000 audit: BPF prog-id=230 op=UNLOAD Dec 12 18:24:54.744000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdfc69d370 a2=94 a3=4 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.744000 audit: BPF prog-id=231 op=LOAD Dec 12 18:24:54.744000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdfc69d470 a2=94 a3=7ffdfc69d5f0 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.744000 audit: BPF prog-id=231 op=UNLOAD Dec 12 18:24:54.744000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdfc69d470 a2=0 a3=7ffdfc69d5f0 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.744000 audit: BPF prog-id=232 op=LOAD Dec 12 18:24:54.744000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdfc69cba0 a2=94 a3=2 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.744000 audit: BPF prog-id=232 op=UNLOAD Dec 12 18:24:54.744000 audit[4465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdfc69cba0 a2=0 a3=2 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.744000 audit: BPF prog-id=233 op=LOAD Dec 12 18:24:54.744000 audit[4465]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdfc69cca0 a2=94 a3=30 items=0 ppid=4364 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.744000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:24:54.754000 audit: BPF prog-id=234 op=LOAD Dec 12 18:24:54.754000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd934d1630 a2=98 a3=0 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.754000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.754000 audit: BPF prog-id=234 op=UNLOAD Dec 12 18:24:54.754000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd934d1600 a3=0 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.754000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.754000 audit: BPF prog-id=235 op=LOAD Dec 12 18:24:54.754000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd934d1420 a2=94 a3=54428f items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.754000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.754000 audit: BPF prog-id=235 op=UNLOAD Dec 12 18:24:54.754000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd934d1420 a2=94 a3=54428f items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.754000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.754000 audit: BPF prog-id=236 op=LOAD Dec 12 18:24:54.754000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd934d1450 a2=94 a3=2 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.754000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.754000 audit: BPF prog-id=236 op=UNLOAD Dec 12 18:24:54.754000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd934d1450 a2=0 a3=2 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.754000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.916124 systemd-networkd[1517]: cali77cb80178d0: Gained IPv6LL Dec 12 18:24:54.942000 audit: BPF prog-id=237 op=LOAD Dec 12 18:24:54.942000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd934d1310 a2=94 a3=1 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.942000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.942000 audit: BPF prog-id=237 op=UNLOAD Dec 12 18:24:54.942000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd934d1310 a2=94 a3=1 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.942000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=238 op=LOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd934d1300 a2=94 a3=4 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=238 op=UNLOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd934d1300 a2=0 a3=4 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=239 op=LOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd934d1160 a2=94 a3=5 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=239 op=UNLOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd934d1160 a2=0 a3=5 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=240 op=LOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd934d1380 a2=94 a3=6 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=240 op=UNLOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd934d1380 a2=0 a3=6 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.951000 audit: BPF prog-id=241 op=LOAD Dec 12 18:24:54.951000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd934d0b30 a2=94 a3=88 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.952000 audit: BPF prog-id=242 op=LOAD Dec 12 18:24:54.952000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd934d09b0 a2=94 a3=2 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.952000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.952000 audit: BPF prog-id=242 op=UNLOAD Dec 12 18:24:54.952000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd934d09e0 a2=0 a3=7ffd934d0ae0 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.952000 audit: BPF prog-id=241 op=UNLOAD Dec 12 18:24:54.952000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2087bd10 a2=0 a3=5dc789f507657458 items=0 ppid=4364 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:24:54.967000 audit: BPF prog-id=233 op=UNLOAD Dec 12 18:24:54.967000 audit[4364]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0006c9700 a2=0 a3=0 items=0 ppid=3938 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:54.967000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 18:24:55.008000 audit[4487]: NETFILTER_CFG table=filter:119 
family=2 entries=17 op=nft_register_rule pid=4487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:55.008000 audit[4487]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff2ef8fc0 a2=0 a3=7ffff2ef8fac items=0 ppid=2925 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:55.016000 audit[4487]: NETFILTER_CFG table=nat:120 family=2 entries=35 op=nft_register_chain pid=4487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:55.016000 audit[4487]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffff2ef8fc0 a2=0 a3=7ffff2ef8fac items=0 ppid=2925 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:55.048000 audit[4500]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4500 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:55.048000 audit[4500]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd2cc4f840 a2=0 a3=7ffd2cc4f82c items=0 ppid=4364 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.048000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:55.050000 audit[4501]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4501 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:55.050000 audit[4501]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe748188e0 a2=0 a3=7ffe748188cc items=0 ppid=4364 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.050000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:55.057000 audit[4499]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4499 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:55.057000 audit[4499]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe14b47c10 a2=0 a3=7ffe14b47bfc items=0 ppid=4364 pid=4499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.057000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:55.063000 audit[4503]: NETFILTER_CFG table=filter:124 family=2 entries=198 op=nft_register_chain pid=4503 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:55.063000 audit[4503]: SYSCALL arch=c000003e syscall=46 success=yes exit=114752 a0=3 a1=7fff918a6c80 a2=0 a3=7fff918a6c6c items=0 ppid=4364 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.063000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:55.364426 systemd-networkd[1517]: cali5f0861f39ee: Gained IPv6LL Dec 12 18:24:55.394548 containerd[1630]: time="2025-12-12T18:24:55.394509778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-gclmc,Uid:c1ce557f-fee1-488f-bf03-0d09f4a1964c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:24:55.521338 systemd-networkd[1517]: cali81b9b12faf1: Link UP Dec 12 18:24:55.522457 systemd-networkd[1517]: cali81b9b12faf1: Gained carrier Dec 12 18:24:55.535073 kubelet[2814]: I1212 18:24:55.534421 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dt25m" podStartSLOduration=31.534406743 podStartE2EDuration="31.534406743s" podCreationTimestamp="2025-12-12 18:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:24:54.58669958 +0000 UTC m=+37.288936084" watchObservedRunningTime="2025-12-12 18:24:55.534406743 +0000 UTC m=+38.236643237" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.445 [INFO][4512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0 calico-apiserver-5d49f44685- calico-apiserver c1ce557f-fee1-488f-bf03-0d09f4a1964c 865 0 2025-12-12 18:24:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d49f44685 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-207-166 calico-apiserver-5d49f44685-gclmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81b9b12faf1 [] [] }} ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.446 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.475 [INFO][4525] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" HandleID="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Workload="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.476 [INFO][4525] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" HandleID="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Workload="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-207-166", "pod":"calico-apiserver-5d49f44685-gclmc", "timestamp":"2025-12-12 18:24:55.475971385 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.476 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.476 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.476 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.484 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.490 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.494 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.498 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.500 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.500 [INFO][4525] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.501 [INFO][4525] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e Dec 12 18:24:55.537302 
containerd[1630]: 2025-12-12 18:24:55.506 [INFO][4525] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.512 [INFO][4525] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.197/26] block=192.168.125.192/26 handle="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.512 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.197/26] handle="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" host="172-234-207-166" Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.512 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:24:55.537302 containerd[1630]: 2025-12-12 18:24:55.512 [INFO][4525] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.197/26] IPv6=[] ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" HandleID="k8s-pod-network.d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Workload="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 18:24:55.539197 containerd[1630]: 2025-12-12 18:24:55.517 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0", GenerateName:"calico-apiserver-5d49f44685-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"c1ce557f-fee1-488f-bf03-0d09f4a1964c", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49f44685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"calico-apiserver-5d49f44685-gclmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b9b12faf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:55.539197 containerd[1630]: 2025-12-12 18:24:55.517 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.197/32] ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 18:24:55.539197 containerd[1630]: 2025-12-12 18:24:55.517 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81b9b12faf1 ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 
18:24:55.539197 containerd[1630]: 2025-12-12 18:24:55.522 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 18:24:55.539197 containerd[1630]: 2025-12-12 18:24:55.523 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0", GenerateName:"calico-apiserver-5d49f44685-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1ce557f-fee1-488f-bf03-0d09f4a1964c", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d49f44685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e", Pod:"calico-apiserver-5d49f44685-gclmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81b9b12faf1", MAC:"02:d6:d8:7b:24:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:55.539197 containerd[1630]: 2025-12-12 18:24:55.533 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" Namespace="calico-apiserver" Pod="calico-apiserver-5d49f44685-gclmc" WorkloadEndpoint="172--234--207--166-k8s-calico--apiserver--5d49f44685--gclmc-eth0" Dec 12 18:24:55.556000 audit[4541]: NETFILTER_CFG table=filter:125 family=2 entries=49 op=nft_register_chain pid=4541 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:55.556000 audit[4541]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffdca4761c0 a2=0 a3=7ffdca4761ac items=0 ppid=4364 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.556000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:55.565248 kubelet[2814]: E1212 18:24:55.562973 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:55.570239 kubelet[2814]: E1212 18:24:55.570100 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:24:55.570239 kubelet[2814]: E1212 18:24:55.570197 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:24:55.576233 containerd[1630]: time="2025-12-12T18:24:55.576188992Z" level=info msg="connecting to shim d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e" address="unix:///run/containerd/s/c58ae56e0e463d21844a3fbd3280d440a091edb42655c40065b153caf2230142" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:55.626122 systemd[1]: Started cri-containerd-d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e.scope - libcontainer container d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e. 
Dec 12 18:24:55.643000 audit: BPF prog-id=243 op=LOAD Dec 12 18:24:55.644000 audit: BPF prog-id=244 op=LOAD Dec 12 18:24:55.644000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.644000 audit: BPF prog-id=244 op=UNLOAD Dec 12 18:24:55.644000 audit[4562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.644000 audit: BPF prog-id=245 op=LOAD Dec 12 18:24:55.644000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.644000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.644000 audit: BPF prog-id=246 op=LOAD Dec 12 18:24:55.644000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.645000 audit: BPF prog-id=246 op=UNLOAD Dec 12 18:24:55.645000 audit[4562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.645000 audit: BPF prog-id=245 op=UNLOAD Dec 12 18:24:55.645000 audit[4562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:55.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.645000 audit: BPF prog-id=247 op=LOAD Dec 12 18:24:55.645000 audit[4562]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4551 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:55.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439363563643265303530303733356164663733643738396434313565 Dec 12 18:24:55.687882 containerd[1630]: time="2025-12-12T18:24:55.687793749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d49f44685-gclmc,Uid:c1ce557f-fee1-488f-bf03-0d09f4a1964c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d965cd2e0500735adf73d789d415e414a49832f6bc04c910ed24ec17ed63ed7e\"" Dec 12 18:24:55.691637 containerd[1630]: time="2025-12-12T18:24:55.690852238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:24:55.748518 systemd-networkd[1517]: calib3a7f50e26e: Gained IPv6LL Dec 12 18:24:55.843890 containerd[1630]: time="2025-12-12T18:24:55.843809604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:55.844867 containerd[1630]: time="2025-12-12T18:24:55.844743633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:24:55.844867 containerd[1630]: time="2025-12-12T18:24:55.844763693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:55.845160 kubelet[2814]: E1212 18:24:55.845107 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:24:55.845252 kubelet[2814]: E1212 18:24:55.845176 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:24:55.845669 kubelet[2814]: E1212 18:24:55.845308 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:55.845669 kubelet[2814]: E1212 18:24:55.845365 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:24:55.876530 systemd-networkd[1517]: 
vxlan.calico: Gained IPv6LL Dec 12 18:24:56.394739 containerd[1630]: time="2025-12-12T18:24:56.394675789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dl6p8,Uid:a8fbc410-f738-4c23-8813-68d1a7480f15,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:56.396349 kubelet[2814]: E1212 18:24:56.396306 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:56.400040 containerd[1630]: time="2025-12-12T18:24:56.396958508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pxm79,Uid:54fb9877-a362-4f5b-ba54-73070bbdb7a5,Namespace:kube-system,Attempt:0,}" Dec 12 18:24:56.551354 systemd-networkd[1517]: calie082098600b: Link UP Dec 12 18:24:56.552333 systemd-networkd[1517]: calie082098600b: Gained carrier Dec 12 18:24:56.580027 kubelet[2814]: E1212 18:24:56.579101 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:56.580027 kubelet[2814]: E1212 18:24:56.579396 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.467 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0 
goldmane-7c778bb748- calico-system a8fbc410-f738-4c23-8813-68d1a7480f15 867 0 2025-12-12 18:24:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-207-166 goldmane-7c778bb748-dl6p8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie082098600b [] [] }} ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.468 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.512 [INFO][4615] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" HandleID="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Workload="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.512 [INFO][4615] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" HandleID="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Workload="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399e90), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-207-166", "pod":"goldmane-7c778bb748-dl6p8", "timestamp":"2025-12-12 
18:24:56.512595789 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.513 [INFO][4615] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.513 [INFO][4615] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.513 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.521 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.526 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.530 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.532 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.534 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.534 [INFO][4615] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.535 [INFO][4615] ipam/ipam.go 1780: Creating 
new handle: k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.539 [INFO][4615] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.544 [INFO][4615] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.198/26] block=192.168.125.192/26 handle="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.544 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.198/26] handle="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" host="172-234-207-166" Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.544 [INFO][4615] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:24:56.580736 containerd[1630]: 2025-12-12 18:24:56.544 [INFO][4615] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.198/26] IPv6=[] ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" HandleID="k8s-pod-network.ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Workload="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.581282 containerd[1630]: 2025-12-12 18:24:56.548 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a8fbc410-f738-4c23-8813-68d1a7480f15", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"goldmane-7c778bb748-dl6p8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie082098600b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:56.581282 containerd[1630]: 2025-12-12 18:24:56.548 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.198/32] ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.581282 containerd[1630]: 2025-12-12 18:24:56.548 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie082098600b ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.581282 containerd[1630]: 2025-12-12 18:24:56.552 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.581282 containerd[1630]: 2025-12-12 18:24:56.553 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"a8fbc410-f738-4c23-8813-68d1a7480f15", ResourceVersion:"867", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a", Pod:"goldmane-7c778bb748-dl6p8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie082098600b", MAC:"be:df:20:0e:34:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:56.581282 containerd[1630]: 2025-12-12 18:24:56.574 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" Namespace="calico-system" Pod="goldmane-7c778bb748-dl6p8" WorkloadEndpoint="172--234--207--166-k8s-goldmane--7c778bb748--dl6p8-eth0" Dec 12 18:24:56.622028 containerd[1630]: time="2025-12-12T18:24:56.621786178Z" level=info msg="connecting to shim ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a" address="unix:///run/containerd/s/429c914bc1e116c755a86f6cc08973033047d736a2f7acc64537adb3dbd9ad9b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:56.660000 audit[4659]: NETFILTER_CFG table=filter:126 family=2 entries=14 op=nft_register_rule pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 
18:24:56.660000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8f206130 a2=0 a3=7ffc8f20611c items=0 ppid=2925 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:56.664000 audit[4659]: NETFILTER_CFG table=nat:127 family=2 entries=20 op=nft_register_rule pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:56.664000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc8f206130 a2=0 a3=7ffc8f20611c items=0 ppid=2925 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:56.691157 systemd[1]: Started cri-containerd-ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a.scope - libcontainer container ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a. 
Dec 12 18:24:56.711000 audit: BPF prog-id=248 op=LOAD Dec 12 18:24:56.711000 audit: BPF prog-id=249 op=LOAD Dec 12 18:24:56.711000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.711000 audit: BPF prog-id=249 op=UNLOAD Dec 12 18:24:56.711000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.711000 audit: BPF prog-id=250 op=LOAD Dec 12 18:24:56.711000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.711000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.711000 audit: BPF prog-id=251 op=LOAD Dec 12 18:24:56.711000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.712000 audit: BPF prog-id=251 op=UNLOAD Dec 12 18:24:56.712000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.712000 audit: BPF prog-id=250 op=UNLOAD Dec 12 18:24:56.712000 audit[4658]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:56.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.712000 audit: BPF prog-id=252 op=LOAD Dec 12 18:24:56.712000 audit[4658]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4644 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565323435323265393630396363323939656466653235393138306431 Dec 12 18:24:56.733412 systemd-networkd[1517]: cali81dbcd7a977: Link UP Dec 12 18:24:56.733800 systemd-networkd[1517]: cali81dbcd7a977: Gained carrier Dec 12 18:24:56.747000 audit[4679]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:56.747000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=29932 a0=3 a1=7fff5f5ca1e0 a2=0 a3=7fff5f5ca1cc items=0 ppid=4364 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.747000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.471 [INFO][4597] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0 coredns-66bc5c9577- kube-system 54fb9877-a362-4f5b-ba54-73070bbdb7a5 866 0 2025-12-12 18:24:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-207-166 coredns-66bc5c9577-pxm79 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81dbcd7a977 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.471 [INFO][4597] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.512 [INFO][4619] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" HandleID="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Workload="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.514 [INFO][4619] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" HandleID="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Workload="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-207-166", "pod":"coredns-66bc5c9577-pxm79", "timestamp":"2025-12-12 18:24:56.512858689 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.514 [INFO][4619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.544 [INFO][4619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.545 [INFO][4619] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.624 [INFO][4619] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.643 [INFO][4619] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.658 [INFO][4619] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.667 [INFO][4619] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.671 [INFO][4619] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.672 [INFO][4619] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 
handle="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.674 [INFO][4619] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.679 [INFO][4619] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.696 [INFO][4619] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.199/26] block=192.168.125.192/26 handle="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.697 [INFO][4619] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.199/26] handle="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" host="172-234-207-166" Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.698 [INFO][4619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:24:56.757480 containerd[1630]: 2025-12-12 18:24:56.699 [INFO][4619] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.199/26] IPv6=[] ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" HandleID="k8s-pod-network.a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Workload="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.757970 containerd[1630]: 2025-12-12 18:24:56.712 [INFO][4597] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"54fb9877-a362-4f5b-ba54-73070bbdb7a5", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"coredns-66bc5c9577-pxm79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81dbcd7a977", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:56.757970 containerd[1630]: 2025-12-12 18:24:56.714 [INFO][4597] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.199/32] ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.757970 containerd[1630]: 2025-12-12 18:24:56.714 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81dbcd7a977 ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.757970 containerd[1630]: 2025-12-12 18:24:56.720 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.757970 containerd[1630]: 2025-12-12 18:24:56.722 [INFO][4597] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"54fb9877-a362-4f5b-ba54-73070bbdb7a5", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a", Pod:"coredns-66bc5c9577-pxm79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81dbcd7a977", MAC:"0a:f4:79:3d:d6:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:56.757970 containerd[1630]: 2025-12-12 18:24:56.743 [INFO][4597] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" Namespace="kube-system" Pod="coredns-66bc5c9577-pxm79" WorkloadEndpoint="172--234--207--166-k8s-coredns--66bc5c9577--pxm79-eth0" Dec 12 18:24:56.799000 audit[4694]: NETFILTER_CFG table=filter:129 family=2 entries=58 op=nft_register_chain pid=4694 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:56.799000 audit[4694]: SYSCALL arch=c000003e syscall=46 success=yes exit=26760 a0=3 a1=7ffdbbc1fca0 a2=0 a3=7ffdbbc1fc8c items=0 ppid=4364 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.799000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:56.809635 containerd[1630]: time="2025-12-12T18:24:56.809531838Z" level=info msg="connecting to shim a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a" address="unix:///run/containerd/s/e24f95b7318d917d2e43213505b02e2034ef625f1d3ec17aa6076c6da2309e54" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:56.816716 containerd[1630]: time="2025-12-12T18:24:56.816662249Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-dl6p8,Uid:a8fbc410-f738-4c23-8813-68d1a7480f15,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee24522e9609cc299edfe259180d129f6f13735d887c13c90549e551f05f448a\"" Dec 12 18:24:56.819222 containerd[1630]: time="2025-12-12T18:24:56.819204499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:24:56.883472 systemd[1]: Started cri-containerd-a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a.scope - libcontainer container a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a. Dec 12 18:24:56.900234 systemd-networkd[1517]: cali81b9b12faf1: Gained IPv6LL Dec 12 18:24:56.962937 containerd[1630]: time="2025-12-12T18:24:56.962793599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:56.964249 containerd[1630]: time="2025-12-12T18:24:56.964040678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:24:56.964249 containerd[1630]: time="2025-12-12T18:24:56.964109968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:56.967011 kubelet[2814]: E1212 18:24:56.965252 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:24:56.967134 kubelet[2814]: E1212 18:24:56.967111 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:24:56.967831 kubelet[2814]: E1212 18:24:56.967809 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:56.968350 kubelet[2814]: E1212 18:24:56.968311 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:24:56.968000 audit: BPF prog-id=253 op=LOAD Dec 12 18:24:56.969000 audit: BPF prog-id=254 op=LOAD Dec 12 18:24:56.969000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4708 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:56.969000 audit: BPF prog-id=254 op=UNLOAD Dec 12 18:24:56.969000 audit[4719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:56.970000 audit: BPF prog-id=255 op=LOAD Dec 12 18:24:56.970000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4708 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:56.970000 audit: BPF prog-id=256 op=LOAD Dec 12 18:24:56.970000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4708 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:56.970000 audit: BPF prog-id=256 op=UNLOAD Dec 12 18:24:56.970000 audit[4719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4708 
pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:56.970000 audit: BPF prog-id=255 op=UNLOAD Dec 12 18:24:56.970000 audit[4719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:56.970000 audit: BPF prog-id=257 op=LOAD Dec 12 18:24:56.970000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4708 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:56.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373035663437363736323832663964656266326336633166363131 Dec 12 18:24:57.041793 containerd[1630]: time="2025-12-12T18:24:57.041731200Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-pxm79,Uid:54fb9877-a362-4f5b-ba54-73070bbdb7a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a\"" Dec 12 18:24:57.043492 kubelet[2814]: E1212 18:24:57.043447 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:57.049369 containerd[1630]: time="2025-12-12T18:24:57.049325730Z" level=info msg="CreateContainer within sandbox \"a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:24:57.061025 containerd[1630]: time="2025-12-12T18:24:57.060950461Z" level=info msg="Container f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:24:57.068452 containerd[1630]: time="2025-12-12T18:24:57.068350630Z" level=info msg="CreateContainer within sandbox \"a2705f47676282f9debf2c6c1f6118b36732b20bdbb7ff765c34a0dbd799e99a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2\"" Dec 12 18:24:57.069355 containerd[1630]: time="2025-12-12T18:24:57.069311171Z" level=info msg="StartContainer for \"f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2\"" Dec 12 18:24:57.071918 containerd[1630]: time="2025-12-12T18:24:57.071820491Z" level=info msg="connecting to shim f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2" address="unix:///run/containerd/s/e24f95b7318d917d2e43213505b02e2034ef625f1d3ec17aa6076c6da2309e54" protocol=ttrpc version=3 Dec 12 18:24:57.096467 systemd[1]: Started cri-containerd-f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2.scope - libcontainer container f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2. 
Dec 12 18:24:57.115000 audit: BPF prog-id=258 op=LOAD Dec 12 18:24:57.115000 audit: BPF prog-id=259 op=LOAD Dec 12 18:24:57.115000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.115000 audit: BPF prog-id=259 op=UNLOAD Dec 12 18:24:57.115000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.116000 audit: BPF prog-id=260 op=LOAD Dec 12 18:24:57.116000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.116000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.116000 audit: BPF prog-id=261 op=LOAD Dec 12 18:24:57.116000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.116000 audit: BPF prog-id=261 op=UNLOAD Dec 12 18:24:57.116000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.116000 audit: BPF prog-id=260 op=UNLOAD Dec 12 18:24:57.116000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:24:57.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.116000 audit: BPF prog-id=262 op=LOAD Dec 12 18:24:57.116000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4708 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633931333563653235663466343137353831353664626334323435 Dec 12 18:24:57.142456 containerd[1630]: time="2025-12-12T18:24:57.142400513Z" level=info msg="StartContainer for \"f3c9135ce25f4f41758156dbc42455584d5b494255027449965a605c147d95b2\" returns successfully" Dec 12 18:24:57.395525 containerd[1630]: time="2025-12-12T18:24:57.395211952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57947d7c9d-zl2bt,Uid:d4234e62-fee9-4e5f-91a6-36421f56e51b,Namespace:calico-system,Attempt:0,}" Dec 12 18:24:57.501360 systemd-networkd[1517]: cali32b5f5bc48a: Link UP Dec 12 18:24:57.502548 systemd-networkd[1517]: cali32b5f5bc48a: Gained carrier Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.426 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0 calico-kube-controllers-57947d7c9d- calico-system d4234e62-fee9-4e5f-91a6-36421f56e51b 860 0 2025-12-12 18:24:37 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57947d7c9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-207-166 calico-kube-controllers-57947d7c9d-zl2bt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali32b5f5bc48a [] [] }} ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.427 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.458 [INFO][4792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" HandleID="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Workload="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.458 [INFO][4792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" HandleID="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Workload="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac440), Attrs:map[string]string{"namespace":"calico-system", 
"node":"172-234-207-166", "pod":"calico-kube-controllers-57947d7c9d-zl2bt", "timestamp":"2025-12-12 18:24:57.458740494 +0000 UTC"}, Hostname:"172-234-207-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.459 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.459 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.459 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-207-166' Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.469 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.474 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.477 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.479 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.481 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.481 [INFO][4792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" host="172-234-207-166" Dec 
12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.483 [INFO][4792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433 Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.486 [INFO][4792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.492 [INFO][4792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.200/26] block=192.168.125.192/26 handle="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.492 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.200/26] handle="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" host="172-234-207-166" Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.492 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:24:57.517225 containerd[1630]: 2025-12-12 18:24:57.492 [INFO][4792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.200/26] IPv6=[] ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" HandleID="k8s-pod-network.0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Workload="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" Dec 12 18:24:57.518348 containerd[1630]: 2025-12-12 18:24:57.495 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0", GenerateName:"calico-kube-controllers-57947d7c9d-", Namespace:"calico-system", SelfLink:"", UID:"d4234e62-fee9-4e5f-91a6-36421f56e51b", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57947d7c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"", Pod:"calico-kube-controllers-57947d7c9d-zl2bt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32b5f5bc48a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:57.518348 containerd[1630]: 2025-12-12 18:24:57.495 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.200/32] ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" Dec 12 18:24:57.518348 containerd[1630]: 2025-12-12 18:24:57.495 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32b5f5bc48a ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" Dec 12 18:24:57.518348 containerd[1630]: 2025-12-12 18:24:57.502 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" Dec 12 18:24:57.518348 containerd[1630]: 2025-12-12 18:24:57.503 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0", GenerateName:"calico-kube-controllers-57947d7c9d-", Namespace:"calico-system", SelfLink:"", UID:"d4234e62-fee9-4e5f-91a6-36421f56e51b", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57947d7c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-207-166", ContainerID:"0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433", Pod:"calico-kube-controllers-57947d7c9d-zl2bt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32b5f5bc48a", MAC:"d6:bb:f4:1a:eb:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:24:57.518348 containerd[1630]: 2025-12-12 18:24:57.513 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" Namespace="calico-system" Pod="calico-kube-controllers-57947d7c9d-zl2bt" WorkloadEndpoint="172--234--207--166-k8s-calico--kube--controllers--57947d7c9d--zl2bt-eth0" 
Dec 12 18:24:57.540598 containerd[1630]: time="2025-12-12T18:24:57.540167676Z" level=info msg="connecting to shim 0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433" address="unix:///run/containerd/s/0a681c91a81f96e49cc8bc1d9f4af8cb0966ba731a73c43dd38981a2ea1b5522" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:24:57.546000 audit[4818]: NETFILTER_CFG table=filter:130 family=2 entries=56 op=nft_register_chain pid=4818 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:24:57.546000 audit[4818]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffeb09fd670 a2=0 a3=7ffeb09fd65c items=0 ppid=4364 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:24:57.571291 systemd[1]: Started cri-containerd-0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433.scope - libcontainer container 0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433. 
Dec 12 18:24:57.584037 kubelet[2814]: E1212 18:24:57.583820 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:24:57.589607 kubelet[2814]: E1212 18:24:57.589569 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:57.591000 audit: BPF prog-id=263 op=LOAD Dec 12 18:24:57.592000 audit: BPF prog-id=264 op=LOAD Dec 12 18:24:57.592000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.594000 audit: BPF prog-id=264 op=UNLOAD Dec 12 18:24:57.594000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.594000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.594000 audit: BPF prog-id=265 op=LOAD Dec 12 18:24:57.594000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.594000 audit: BPF prog-id=266 op=LOAD Dec 12 18:24:57.594000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.594000 audit: BPF prog-id=266 op=UNLOAD Dec 12 18:24:57.594000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:24:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.594000 audit: BPF prog-id=265 op=UNLOAD Dec 12 18:24:57.594000 audit[4830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.599495 kubelet[2814]: E1212 18:24:57.596060 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:24:57.595000 audit: BPF prog-id=267 op=LOAD Dec 12 18:24:57.595000 audit[4830]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4817 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.595000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062396138623639396466353863373163393337626262366366636563 Dec 12 18:24:57.601534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985462533.mount: Deactivated successfully. Dec 12 18:24:57.655429 kubelet[2814]: I1212 18:24:57.655275 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pxm79" podStartSLOduration=33.65525888 podStartE2EDuration="33.65525888s" podCreationTimestamp="2025-12-12 18:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:24:57.636521349 +0000 UTC m=+40.338757843" watchObservedRunningTime="2025-12-12 18:24:57.65525888 +0000 UTC m=+40.357495374" Dec 12 18:24:57.680549 containerd[1630]: time="2025-12-12T18:24:57.680451790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57947d7c9d-zl2bt,Uid:d4234e62-fee9-4e5f-91a6-36421f56e51b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b9a8b699df58c71c937bbb6cfcec4c20e607b6fbe9e7c02bdc7dc1c9cff9433\"" Dec 12 18:24:57.681000 audit[4855]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:57.681000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde4ba70f0 a2=0 a3=7ffde4ba70dc items=0 ppid=2925 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:57.683832 
containerd[1630]: time="2025-12-12T18:24:57.683769860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:24:57.695000 audit[4855]: NETFILTER_CFG table=nat:132 family=2 entries=56 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:24:57.695000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffde4ba70f0 a2=0 a3=7ffde4ba70dc items=0 ppid=2925 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:24:57.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:24:57.824633 containerd[1630]: time="2025-12-12T18:24:57.824568855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:24:57.825567 containerd[1630]: time="2025-12-12T18:24:57.825516845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:24:57.825672 containerd[1630]: time="2025-12-12T18:24:57.825585545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:24:57.825755 kubelet[2814]: E1212 18:24:57.825715 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:24:57.825841 kubelet[2814]: E1212 18:24:57.825760 2814 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:24:57.826105 kubelet[2814]: E1212 18:24:57.825848 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:24:57.826105 kubelet[2814]: E1212 18:24:57.825879 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:24:58.244625 systemd-networkd[1517]: cali81dbcd7a977: Gained IPv6LL Dec 12 18:24:58.245772 systemd-networkd[1517]: calie082098600b: Gained IPv6LL Dec 12 18:24:58.594017 kubelet[2814]: E1212 18:24:58.593970 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:58.595347 kubelet[2814]: E1212 18:24:58.594820 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:24:58.595347 kubelet[2814]: E1212 18:24:58.594858 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:24:59.012186 systemd-networkd[1517]: cali32b5f5bc48a: Gained IPv6LL Dec 12 18:24:59.595062 kubelet[2814]: E1212 18:24:59.594858 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:24:59.596657 kubelet[2814]: E1212 18:24:59.596607 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:25:00.394041 containerd[1630]: time="2025-12-12T18:25:00.393928614Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:25:00.525669 containerd[1630]: time="2025-12-12T18:25:00.525600260Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:00.526654 containerd[1630]: time="2025-12-12T18:25:00.526574079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:25:00.526930 containerd[1630]: time="2025-12-12T18:25:00.526667339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:00.527045 kubelet[2814]: E1212 18:25:00.526804 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:25:00.527045 kubelet[2814]: E1212 18:25:00.526841 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:25:00.527045 kubelet[2814]: E1212 18:25:00.526914 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:00.529129 containerd[1630]: time="2025-12-12T18:25:00.528909860Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:25:00.655066 containerd[1630]: time="2025-12-12T18:25:00.654875863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:00.656417 containerd[1630]: time="2025-12-12T18:25:00.656354503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:25:00.656711 containerd[1630]: time="2025-12-12T18:25:00.656384783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:00.657002 kubelet[2814]: E1212 18:25:00.656887 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:25:00.657002 kubelet[2814]: E1212 18:25:00.656937 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:25:00.657793 kubelet[2814]: E1212 18:25:00.657045 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:25:00.657793 kubelet[2814]: E1212 18:25:00.657084 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:25:05.076038 kubelet[2814]: I1212 18:25:05.075861 2814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:25:05.076927 kubelet[2814]: E1212 18:25:05.076899 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:25:05.610719 kubelet[2814]: E1212 18:25:05.610665 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:25:06.394326 containerd[1630]: time="2025-12-12T18:25:06.394234941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:25:06.525396 containerd[1630]: time="2025-12-12T18:25:06.525285082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:06.527471 containerd[1630]: time="2025-12-12T18:25:06.526801342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:25:06.528193 containerd[1630]: time="2025-12-12T18:25:06.526829022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:06.528420 kubelet[2814]: E1212 18:25:06.528390 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:25:06.533569 kubelet[2814]: E1212 18:25:06.531765 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:25:06.533569 kubelet[2814]: E1212 18:25:06.531888 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:06.533706 containerd[1630]: time="2025-12-12T18:25:06.533062904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:25:06.668072 containerd[1630]: time="2025-12-12T18:25:06.667638326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:06.669048 containerd[1630]: time="2025-12-12T18:25:06.669001375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:06.669160 containerd[1630]: time="2025-12-12T18:25:06.669031315Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:25:06.669440 kubelet[2814]: E1212 18:25:06.669392 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:25:06.669510 kubelet[2814]: E1212 18:25:06.669444 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:25:06.669603 kubelet[2814]: E1212 18:25:06.669575 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:06.670116 kubelet[2814]: E1212 18:25:06.669945 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:25:08.395664 containerd[1630]: time="2025-12-12T18:25:08.395534505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:25:08.556132 containerd[1630]: time="2025-12-12T18:25:08.556072968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:08.557268 containerd[1630]: time="2025-12-12T18:25:08.557160619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:25:08.557268 containerd[1630]: time="2025-12-12T18:25:08.557240399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:08.557656 kubelet[2814]: E1212 18:25:08.557586 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:08.558540 kubelet[2814]: E1212 18:25:08.557641 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:08.558540 kubelet[2814]: E1212 18:25:08.558242 2814 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:08.558540 kubelet[2814]: E1212 18:25:08.558272 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:25:08.559081 containerd[1630]: time="2025-12-12T18:25:08.559018329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:25:08.686238 containerd[1630]: time="2025-12-12T18:25:08.686093463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:08.687301 containerd[1630]: time="2025-12-12T18:25:08.687245274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:25:08.687600 containerd[1630]: time="2025-12-12T18:25:08.687456204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:08.687964 kubelet[2814]: E1212 18:25:08.687923 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:08.688126 kubelet[2814]: E1212 18:25:08.688076 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:08.689067 kubelet[2814]: E1212 18:25:08.688971 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:08.689067 kubelet[2814]: E1212 18:25:08.689039 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:25:13.396787 containerd[1630]: time="2025-12-12T18:25:13.395887764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:25:13.524022 containerd[1630]: time="2025-12-12T18:25:13.523763076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:13.524972 containerd[1630]: time="2025-12-12T18:25:13.524924607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:25:13.525207 containerd[1630]: time="2025-12-12T18:25:13.525117267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:13.525753 kubelet[2814]: E1212 18:25:13.525635 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:25:13.525753 kubelet[2814]: E1212 18:25:13.525718 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:25:13.526690 kubelet[2814]: E1212 18:25:13.526262 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:13.527312 kubelet[2814]: E1212 18:25:13.526297 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:25:14.393062 containerd[1630]: 
time="2025-12-12T18:25:14.392950960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:25:14.522169 containerd[1630]: time="2025-12-12T18:25:14.522122524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:14.523259 containerd[1630]: time="2025-12-12T18:25:14.523224934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:25:14.523349 containerd[1630]: time="2025-12-12T18:25:14.523315774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:14.523562 kubelet[2814]: E1212 18:25:14.523499 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:25:14.523562 kubelet[2814]: E1212 18:25:14.523542 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:25:14.523714 kubelet[2814]: E1212 18:25:14.523612 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:14.523714 kubelet[2814]: E1212 18:25:14.523643 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:25:15.399377 kubelet[2814]: E1212 18:25:15.399300 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:25:18.396660 kubelet[2814]: E1212 18:25:18.396546 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:25:22.396132 kubelet[2814]: E1212 18:25:22.395932 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:25:22.400269 kubelet[2814]: E1212 18:25:22.398147 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:25:24.393006 kubelet[2814]: E1212 18:25:24.392780 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:25:27.395647 kubelet[2814]: E1212 18:25:27.395193 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:25:28.398841 kubelet[2814]: E1212 18:25:28.398798 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:25:30.392209 kubelet[2814]: E1212 18:25:30.392169 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:25:30.397004 containerd[1630]: time="2025-12-12T18:25:30.396225792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:25:30.551300 containerd[1630]: time="2025-12-12T18:25:30.551255656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:30.552320 containerd[1630]: time="2025-12-12T18:25:30.552284976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:25:30.552390 containerd[1630]: time="2025-12-12T18:25:30.552301025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:30.552586 kubelet[2814]: E1212 18:25:30.552531 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:25:30.552586 kubelet[2814]: E1212 18:25:30.552578 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:25:30.552681 kubelet[2814]: E1212 18:25:30.552663 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:30.554086 containerd[1630]: time="2025-12-12T18:25:30.554014657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:25:30.694320 containerd[1630]: time="2025-12-12T18:25:30.693462662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:30.695186 containerd[1630]: time="2025-12-12T18:25:30.695135744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:25:30.695186 containerd[1630]: time="2025-12-12T18:25:30.695161004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:30.695656 kubelet[2814]: E1212 18:25:30.695363 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:25:30.695656 kubelet[2814]: E1212 18:25:30.695413 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:25:30.695656 kubelet[2814]: E1212 18:25:30.695495 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:30.695656 kubelet[2814]: E1212 18:25:30.695534 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:25:32.393433 containerd[1630]: time="2025-12-12T18:25:32.393336978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:25:32.530528 containerd[1630]: time="2025-12-12T18:25:32.530403774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:32.531738 containerd[1630]: time="2025-12-12T18:25:32.531649831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:25:32.531917 containerd[1630]: time="2025-12-12T18:25:32.531654561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:32.532083 kubelet[2814]: E1212 18:25:32.532003 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:25:32.532596 kubelet[2814]: E1212 18:25:32.532107 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:25:32.532596 kubelet[2814]: E1212 18:25:32.532210 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:32.535108 containerd[1630]: time="2025-12-12T18:25:32.535056356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:25:32.663663 containerd[1630]: time="2025-12-12T18:25:32.663191095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:32.664226 containerd[1630]: time="2025-12-12T18:25:32.664195514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:25:32.664372 containerd[1630]: time="2025-12-12T18:25:32.664282723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:32.664541 kubelet[2814]: E1212 18:25:32.664465 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:25:32.664611 kubelet[2814]: E1212 18:25:32.664551 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 
18:25:32.664677 kubelet[2814]: E1212 18:25:32.664652 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:32.664728 kubelet[2814]: E1212 18:25:32.664703 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:25:34.395737 containerd[1630]: time="2025-12-12T18:25:34.395675493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:25:34.524580 containerd[1630]: time="2025-12-12T18:25:34.524350932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:34.525590 containerd[1630]: time="2025-12-12T18:25:34.525462760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:25:34.525590 containerd[1630]: time="2025-12-12T18:25:34.525564539Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:34.525906 kubelet[2814]: E1212 18:25:34.525859 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:34.526998 kubelet[2814]: E1212 18:25:34.526349 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:34.526998 kubelet[2814]: E1212 18:25:34.526563 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:34.526998 kubelet[2814]: E1212 18:25:34.526604 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:25:34.527832 containerd[1630]: time="2025-12-12T18:25:34.527455121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:25:34.674089 containerd[1630]: 
time="2025-12-12T18:25:34.673924444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:34.675600 containerd[1630]: time="2025-12-12T18:25:34.675562118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:25:34.675786 containerd[1630]: time="2025-12-12T18:25:34.675670647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:34.676663 kubelet[2814]: E1212 18:25:34.676129 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:34.676663 kubelet[2814]: E1212 18:25:34.676184 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:25:34.676663 kubelet[2814]: E1212 18:25:34.676264 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:34.676663 kubelet[2814]: E1212 18:25:34.676298 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:25:38.414939 containerd[1630]: time="2025-12-12T18:25:38.414877942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:25:38.568517 containerd[1630]: time="2025-12-12T18:25:38.568274087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:38.572198 containerd[1630]: time="2025-12-12T18:25:38.572118484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:38.572466 containerd[1630]: time="2025-12-12T18:25:38.572335982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:25:38.572928 kubelet[2814]: E1212 18:25:38.572820 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:25:38.572928 kubelet[2814]: E1212 18:25:38.572903 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:25:38.575261 kubelet[2814]: E1212 18:25:38.575171 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:38.575368 kubelet[2814]: E1212 18:25:38.575346 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:25:39.395490 containerd[1630]: time="2025-12-12T18:25:39.395095340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:25:39.520275 containerd[1630]: time="2025-12-12T18:25:39.520193355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:25:39.521085 containerd[1630]: time="2025-12-12T18:25:39.521050698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:25:39.521227 containerd[1630]: time="2025-12-12T18:25:39.521163037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:25:39.521423 kubelet[2814]: E1212 18:25:39.521384 2814 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:25:39.521533 kubelet[2814]: E1212 18:25:39.521432 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:25:39.521533 kubelet[2814]: E1212 18:25:39.521520 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:25:39.521598 kubelet[2814]: E1212 18:25:39.521556 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:25:44.392689 kubelet[2814]: E1212 18:25:44.392171 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:25:44.396439 kubelet[2814]: E1212 18:25:44.396359 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:25:46.392245 kubelet[2814]: E1212 18:25:46.392160 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:25:47.394188 kubelet[2814]: E1212 18:25:47.394102 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:25:47.396975 kubelet[2814]: E1212 18:25:47.396694 
2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:25:48.392039 kubelet[2814]: E1212 18:25:48.391662 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:25:52.393137 kubelet[2814]: E1212 18:25:52.393080 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:25:54.392889 kubelet[2814]: E1212 18:25:54.392723 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:25:58.394039 kubelet[2814]: E1212 18:25:58.393915 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:25:59.398556 kubelet[2814]: E1212 18:25:59.398474 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:26:01.395923 kubelet[2814]: E1212 18:26:01.395874 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:26:02.393445 kubelet[2814]: E1212 18:26:02.393369 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:26:05.393005 kubelet[2814]: E1212 18:26:05.392875 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" 
podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b"
Dec 12 18:26:06.393949 kubelet[2814]: E1212 18:26:06.393140 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:07.396120 kubelet[2814]: E1212 18:26:07.396045 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15"
Dec 12 18:26:08.392255 kubelet[2814]: E1212 18:26:08.391686 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:08.392878 kubelet[2814]: E1212 18:26:08.392497 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:09.396236 kubelet[2814]: E1212 18:26:09.396160 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\":
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3"
Dec 12 18:26:11.396945 containerd[1630]: time="2025-12-12T18:26:11.396829552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 18:26:11.548805 containerd[1630]: time="2025-12-12T18:26:11.548717576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:11.549864 containerd[1630]: time="2025-12-12T18:26:11.549768662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 18:26:11.549864 containerd[1630]: time="2025-12-12T18:26:11.549795842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:11.550173 kubelet[2814]: E1212 18:26:11.550132 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:26:11.550756 kubelet[2814]: E1212 18:26:11.550177 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:26:11.550756 kubelet[2814]: E1212 18:26:11.550243 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker
start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:11.552477 containerd[1630]: time="2025-12-12T18:26:11.552247723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 18:26:11.689606 containerd[1630]: time="2025-12-12T18:26:11.689241901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:11.690531 containerd[1630]: time="2025-12-12T18:26:11.690469437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 18:26:11.690531 containerd[1630]: time="2025-12-12T18:26:11.690497947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:11.690695 kubelet[2814]: E1212 18:26:11.690656 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:26:11.690765 kubelet[2814]: E1212 18:26:11.690702 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:26:11.690798 kubelet[2814]: E1212
18:26:11.690765 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:11.690853 kubelet[2814]: E1212 18:26:11.690803 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4"
Dec 12 18:26:13.392099 kubelet[2814]: E1212 18:26:13.391567 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:13.394638 kubelet[2814]: E1212 18:26:13.394584 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc"
podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c"
Dec 12 18:26:15.401325 containerd[1630]: time="2025-12-12T18:26:15.401282291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:26:15.534440 containerd[1630]: time="2025-12-12T18:26:15.534385559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:15.535513 containerd[1630]: time="2025-12-12T18:26:15.535458305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:26:15.535562 containerd[1630]: time="2025-12-12T18:26:15.535537724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:15.535741 kubelet[2814]: E1212 18:26:15.535701 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:26:15.535741 kubelet[2814]: E1212 18:26:15.535740 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:26:15.537174 kubelet[2814]: E1212 18:26:15.535807 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:15.537174 kubelet[2814]: E1212 18:26:15.535834 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69"
Dec 12 18:26:19.395284 containerd[1630]: time="2025-12-12T18:26:19.395247224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 18:26:19.528430 containerd[1630]: time="2025-12-12T18:26:19.528385379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:19.529532 containerd[1630]: time="2025-12-12T18:26:19.529453726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 18:26:19.529965 containerd[1630]: time="2025-12-12T18:26:19.529504706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:19.530047 kubelet[2814]: E1212 18:26:19.529748 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:26:19.530047 kubelet[2814]: E1212 18:26:19.529794 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc =
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:26:19.530047 kubelet[2814]: E1212 18:26:19.529874 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:19.530047 kubelet[2814]: E1212 18:26:19.529905 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b"
Dec 12 18:26:20.394265 containerd[1630]: time="2025-12-12T18:26:20.393553595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 18:26:20.547346 containerd[1630]: time="2025-12-12T18:26:20.547305356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:20.548759 containerd[1630]: time="2025-12-12T18:26:20.548664882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 18:26:20.548759 containerd[1630]: time="2025-12-12T18:26:20.548737092Z" level=info msg="stop pulling image
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:20.549008 kubelet[2814]: E1212 18:26:20.548943 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:26:20.549357 kubelet[2814]: E1212 18:26:20.549108 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:26:20.549855 kubelet[2814]: E1212 18:26:20.549597 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:20.550361 containerd[1630]: time="2025-12-12T18:26:20.550325867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 18:26:20.672602 containerd[1630]: time="2025-12-12T18:26:20.672430105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:20.673786 containerd[1630]: time="2025-12-12T18:26:20.673742880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 18:26:20.673851 containerd[1630]: time="2025-12-12T18:26:20.673799530Z" level=info msg="stop pulling image
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:20.674110 kubelet[2814]: E1212 18:26:20.674079 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:26:20.676063 kubelet[2814]: E1212 18:26:20.676037 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:26:20.676552 kubelet[2814]: E1212 18:26:20.676229 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:20.676552 kubelet[2814]: E1212 18:26:20.676261 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15"
Dec 12 18:26:20.676925 containerd[1630]: time="2025-12-12T18:26:20.676900131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:26:20.801476 containerd[1630]: time="2025-12-12T18:26:20.801387281Z" level=info msg="fetch failed after
status: 404 Not Found" host=ghcr.io
Dec 12 18:26:20.803921 containerd[1630]: time="2025-12-12T18:26:20.803870013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:26:20.804015 containerd[1630]: time="2025-12-12T18:26:20.803947904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:20.804296 kubelet[2814]: E1212 18:26:20.804228 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:26:20.804296 kubelet[2814]: E1212 18:26:20.804273 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:26:20.804488 kubelet[2814]: E1212 18:26:20.804465 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:20.804809 kubelet[2814]: E1212 18:26:20.804773 2814
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3"
Dec 12 18:26:26.392680 kubelet[2814]: E1212 18:26:26.392627 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69"
Dec 12 18:26:26.393794 containerd[1630]: time="2025-12-12T18:26:26.392762618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:26:26.395090 kubelet[2814]: E1212 18:26:26.395027 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4"
Dec 12 18:26:26.517645 containerd[1630]: time="2025-12-12T18:26:26.517593511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:26:26.518921 containerd[1630]: time="2025-12-12T18:26:26.518854858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:26:26.519074 containerd[1630]: time="2025-12-12T18:26:26.518938897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:26:26.519163 kubelet[2814]: E1212 18:26:26.519109 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:26:26.519163 kubelet[2814]: E1212 18:26:26.519150 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:26:26.519303 kubelet[2814]: E1212 18:26:26.519220 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod
calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:26:26.519303 kubelet[2814]: E1212 18:26:26.519251 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c"
Dec 12 18:26:31.396032 kubelet[2814]: E1212 18:26:31.394404 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:31.398273 kubelet[2814]: E1212 18:26:31.398242 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b"
Dec 12 18:26:31.400740 kubelet[2814]: E1212 18:26:31.400681 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed
to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3"
Dec 12 18:26:33.395658 kubelet[2814]: E1212 18:26:33.395602 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15"
Dec 12 18:26:37.393973 kubelet[2814]: E1212 18:26:37.393702 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69"
Dec 12 18:26:40.394078 kubelet[2814]: E1212 18:26:40.394023 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c"
Dec 12 18:26:40.395439 kubelet[2814]: E1212 18:26:40.395405 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4"
Dec 12 18:26:42.393921 kubelet[2814]: E1212 18:26:42.393849 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc =
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3"
Dec 12 18:26:44.392801 kubelet[2814]: E1212 18:26:44.392702 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b"
Dec 12 18:26:46.391708 kubelet[2814]: E1212 18:26:46.391661 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:48.392785 kubelet[2814]: E1212 18:26:48.392730 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15"
Dec 12 18:26:51.394251 kubelet[2814]: E1212 18:26:51.394193 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69"
Dec 12 18:26:54.391474 kubelet[2814]: E1212 18:26:54.391366 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:26:55.403568 kubelet[2814]: E1212 18:26:55.403441 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c"
Dec 12 18:26:55.407939 kubelet[2814]: E1212 18:26:55.407670 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not
found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:26:55.408930 kubelet[2814]: E1212 18:26:55.408880 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:26:57.395025 kubelet[2814]: E1212 18:26:57.393458 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:27:02.393526 kubelet[2814]: E1212 18:27:02.392716 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:27:04.393113 kubelet[2814]: E1212 18:27:04.393018 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:27:06.393017 kubelet[2814]: E1212 18:27:06.392713 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:27:07.394051 kubelet[2814]: E1212 18:27:07.393952 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:27:07.396242 kubelet[2814]: E1212 18:27:07.395659 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:27:10.394084 kubelet[2814]: E1212 18:27:10.393163 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:27:11.394214 kubelet[2814]: E1212 18:27:11.394020 2814 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:27:15.392785 kubelet[2814]: E1212 18:27:15.392469 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:27:16.392887 kubelet[2814]: E1212 18:27:16.392843 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:27:17.394506 kubelet[2814]: E1212 18:27:17.394004 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:27:18.392344 kubelet[2814]: E1212 
18:27:18.392291 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:27:20.396366 kubelet[2814]: E1212 18:27:20.396313 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:27:21.394483 kubelet[2814]: E1212 18:27:21.394354 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:27:21.396315 kubelet[2814]: E1212 18:27:21.396292 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:27:21.396679 kubelet[2814]: E1212 18:27:21.396571 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:27:22.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.207.166:22-139.178.89.65:60320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:22.556610 systemd[1]: Started sshd@7-172.234.207.166:22-139.178.89.65:60320.service - OpenSSH per-connection server daemon (139.178.89.65:60320). Dec 12 18:27:22.557581 kernel: kauditd_printk_skb: 422 callbacks suppressed Dec 12 18:27:22.557643 kernel: audit: type=1130 audit(1765564042.555:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.207.166:22-139.178.89.65:60320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:22.874000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.877685 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:22.883341 kernel: audit: type=1101 audit(1765564042.874:750): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.883372 sshd[5091]: Accepted publickey for core from 139.178.89.65 port 60320 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:27:22.875000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.897026 kernel: audit: type=1103 audit(1765564042.875:751): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.897324 kernel: audit: type=1006 audit(1765564042.875:752): pid=5091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 18:27:22.905800 kernel: audit: type=1300 audit(1765564042.875:752): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddeeee700 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:22.875000 audit[5091]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddeeee700 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:22.905013 systemd-logind[1594]: New session 8 of user core. Dec 12 18:27:22.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:22.910045 kernel: audit: type=1327 audit(1765564042.875:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:22.911120 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:27:22.916000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.927121 kernel: audit: type=1105 audit(1765564042.916:753): pid=5091 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.919000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:22.938017 kernel: audit: type=1103 audit(1765564042.919:754): pid=5099 uid=0 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:23.178062 sshd[5099]: Connection closed by 139.178.89.65 port 60320 Dec 12 18:27:23.178399 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:23.180000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:23.187430 systemd[1]: sshd@7-172.234.207.166:22-139.178.89.65:60320.service: Deactivated successfully. Dec 12 18:27:23.192056 kernel: audit: type=1106 audit(1765564043.180:755): pid=5091 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:23.192119 kernel: audit: type=1104 audit(1765564043.180:756): pid=5091 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:23.180000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:23.193256 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:27:23.199044 systemd-logind[1594]: Session 8 logged out. 
Waiting for processes to exit. Dec 12 18:27:23.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.207.166:22-139.178.89.65:60320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:23.200932 systemd-logind[1594]: Removed session 8. Dec 12 18:27:24.391863 kubelet[2814]: E1212 18:27:24.391822 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:27:27.395084 kubelet[2814]: E1212 18:27:27.395027 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:27:28.247292 systemd[1]: Started sshd@8-172.234.207.166:22-139.178.89.65:60322.service - OpenSSH per-connection server daemon (139.178.89.65:60322). Dec 12 18:27:28.249440 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:27:28.249479 kernel: audit: type=1130 audit(1765564048.246:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.207.166:22-139.178.89.65:60322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:28.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.207.166:22-139.178.89.65:60322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:28.393064 kubelet[2814]: E1212 18:27:28.392963 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:27:28.569000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.572227 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:28.573501 sshd[5118]: Accepted publickey for core from 139.178.89.65 port 60322 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:27:28.570000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.580820 kernel: audit: type=1101 audit(1765564048.569:759): pid=5118 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.580893 kernel: audit: type=1103 audit(1765564048.570:760): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.585327 systemd-logind[1594]: New session 9 of user core. Dec 12 18:27:28.588120 kernel: audit: type=1006 audit(1765564048.570:761): pid=5118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 18:27:28.570000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda399f7c0 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:28.593187 kernel: audit: type=1300 audit(1765564048.570:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda399f7c0 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:28.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:28.600531 kernel: audit: type=1327 audit(1765564048.570:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:28.601134 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 18:27:28.604000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.612000 audit[5121]: CRED_ACQ pid=5121 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.616160 kernel: audit: type=1105 audit(1765564048.604:762): pid=5118 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.616204 kernel: audit: type=1103 audit(1765564048.612:763): pid=5121 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.800799 sshd[5121]: Connection closed by 139.178.89.65 port 60322 Dec 12 18:27:28.802189 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:28.803000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.807833 systemd[1]: sshd@8-172.234.207.166:22-139.178.89.65:60322.service: Deactivated successfully. 
Dec 12 18:27:28.811515 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:27:28.813523 kernel: audit: type=1106 audit(1765564048.803:764): pid=5118 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.803000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.817083 systemd-logind[1594]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:27:28.822040 kernel: audit: type=1104 audit(1765564048.803:765): pid=5118 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:28.820914 systemd-logind[1594]: Removed session 9. Dec 12 18:27:28.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.207.166:22-139.178.89.65:60322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:30.392863 kubelet[2814]: E1212 18:27:30.392244 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:27:31.398160 kubelet[2814]: E1212 18:27:31.398061 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:27:31.399761 kubelet[2814]: E1212 18:27:31.399729 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:27:32.392488 kubelet[2814]: E1212 18:27:32.392440 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:27:33.392007 kubelet[2814]: E1212 18:27:33.391578 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:27:33.876751 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:27:33.877052 kernel: audit: type=1130 audit(1765564053.866:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.207.166:22-139.178.89.65:35790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:33.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.207.166:22-139.178.89.65:35790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:33.867574 systemd[1]: Started sshd@9-172.234.207.166:22-139.178.89.65:35790.service - OpenSSH per-connection server daemon (139.178.89.65:35790). 
Dec 12 18:27:34.198000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.207495 kernel: audit: type=1101 audit(1765564054.198:768): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.207585 sshd[5134]: Accepted publickey for core from 139.178.89.65 port 35790 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:27:34.209468 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:34.207000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.219049 kernel: audit: type=1103 audit(1765564054.207:769): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.226039 kernel: audit: type=1006 audit(1765564054.208:770): pid=5134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 18:27:34.208000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf4347de0 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:34.232154 systemd-logind[1594]: New session 10 of user core. Dec 12 18:27:34.236030 kernel: audit: type=1300 audit(1765564054.208:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf4347de0 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:34.208000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:34.244013 kernel: audit: type=1327 audit(1765564054.208:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:34.246607 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:27:34.252000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.262010 kernel: audit: type=1105 audit(1765564054.252:771): pid=5134 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.261000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.271036 kernel: audit: type=1103 audit(1765564054.261:772): pid=5137 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.445558 sshd[5137]: Connection closed by 139.178.89.65 port 35790 Dec 12 18:27:34.447224 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:34.449000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.459043 kernel: audit: type=1106 audit(1765564054.449:773): pid=5134 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.461730 systemd[1]: sshd@9-172.234.207.166:22-139.178.89.65:35790.service: Deactivated successfully. Dec 12 18:27:34.457000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.465994 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:27:34.471008 systemd-logind[1594]: Session 10 logged out. Waiting for processes to exit. 
Dec 12 18:27:34.473014 kernel: audit: type=1104 audit(1765564054.457:774): pid=5134 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:34.473141 systemd-logind[1594]: Removed session 10. Dec 12 18:27:34.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.207.166:22-139.178.89.65:35790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:36.392457 kubelet[2814]: E1212 18:27:36.392405 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:27:39.395034 kubelet[2814]: E1212 18:27:39.394953 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:27:39.396290 kubelet[2814]: E1212 18:27:39.396250 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:27:39.512209 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:27:39.512306 kernel: audit: type=1130 audit(1765564059.510:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.207.166:22-139.178.89.65:35798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:39.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.207.166:22-139.178.89.65:35798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:39.509860 systemd[1]: Started sshd@10-172.234.207.166:22-139.178.89.65:35798.service - OpenSSH per-connection server daemon (139.178.89.65:35798). 
Dec 12 18:27:39.816000 audit[5184]: USER_ACCT pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.820458 sshd[5184]: Accepted publickey for core from 139.178.89.65 port 35798 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:27:39.821908 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:39.826019 kernel: audit: type=1101 audit(1765564059.816:777): pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.818000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.834488 kernel: audit: type=1103 audit(1765564059.818:778): pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.834544 kernel: audit: type=1006 audit(1765564059.818:779): pid=5184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 12 18:27:39.840451 kernel: audit: type=1300 audit(1765564059.818:779): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb37ef40 a2=3 a3=0 items=0 ppid=1 pid=5184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:39.818000 audit[5184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb37ef40 a2=3 a3=0 items=0 ppid=1 pid=5184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:39.840800 systemd-logind[1594]: New session 11 of user core. Dec 12 18:27:39.818000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:39.846876 kernel: audit: type=1327 audit(1765564059.818:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:39.847261 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:27:39.852000 audit[5184]: USER_START pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.863609 kernel: audit: type=1105 audit(1765564059.852:780): pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.863000 audit[5187]: CRED_ACQ pid=5187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:39.872030 kernel: audit: type=1103 audit(1765564059.863:781): pid=5187 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.056009 sshd[5187]: Connection closed by 139.178.89.65 port 35798 Dec 12 18:27:40.057268 sshd-session[5184]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:40.058000 audit[5184]: USER_END pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.062452 systemd-logind[1594]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:27:40.064790 systemd[1]: sshd@10-172.234.207.166:22-139.178.89.65:35798.service: Deactivated successfully. Dec 12 18:27:40.068601 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:27:40.072691 systemd-logind[1594]: Removed session 11. 
Dec 12 18:27:40.059000 audit[5184]: CRED_DISP pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.091868 kernel: audit: type=1106 audit(1765564060.058:782): pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.091930 kernel: audit: type=1104 audit(1765564060.059:783): pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.207.166:22-139.178.89.65:35798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:40.119090 systemd[1]: Started sshd@11-172.234.207.166:22-139.178.89.65:35802.service - OpenSSH per-connection server daemon (139.178.89.65:35802). Dec 12 18:27:40.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.234.207.166:22-139.178.89.65:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:40.432000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.434302 sshd[5199]: Accepted publickey for core from 139.178.89.65 port 35802 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:27:40.437000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.437000 audit[5199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6f5784c0 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:40.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:40.438687 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:40.445396 systemd-logind[1594]: New session 12 of user core. Dec 12 18:27:40.453129 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 18:27:40.456000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.458000 audit[5202]: CRED_ACQ pid=5202 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.695157 sshd[5202]: Connection closed by 139.178.89.65 port 35802 Dec 12 18:27:40.696173 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:40.697000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.697000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:40.702277 systemd-logind[1594]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:27:40.704434 systemd[1]: sshd@11-172.234.207.166:22-139.178.89.65:35802.service: Deactivated successfully. Dec 12 18:27:40.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.234.207.166:22-139.178.89.65:35802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:40.709964 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:27:40.714663 systemd-logind[1594]: Removed session 12. Dec 12 18:27:40.757529 systemd[1]: Started sshd@12-172.234.207.166:22-139.178.89.65:37052.service - OpenSSH per-connection server daemon (139.178.89.65:37052). Dec 12 18:27:40.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.207.166:22-139.178.89.65:37052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:27:41.072000 audit[5212]: USER_ACCT pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:41.074121 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 37052 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:27:41.074000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:41.074000 audit[5212]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6e1b5620 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:27:41.074000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:27:41.075952 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:27:41.084127 systemd-logind[1594]: New session 13 of user core. 
Dec 12 18:27:41.093164 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:27:41.098000 audit[5212]: USER_START pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:41.100000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:41.315282 sshd[5215]: Connection closed by 139.178.89.65 port 37052 Dec 12 18:27:41.315928 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Dec 12 18:27:41.316000 audit[5212]: USER_END pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:41.317000 audit[5212]: CRED_DISP pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:27:41.321939 systemd[1]: sshd@12-172.234.207.166:22-139.178.89.65:37052.service: Deactivated successfully. Dec 12 18:27:41.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.207.166:22-139.178.89.65:37052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:27:41.324200 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:27:41.325753 systemd-logind[1594]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:27:41.327748 systemd-logind[1594]: Removed session 13. Dec 12 18:27:44.393634 kubelet[2814]: E1212 18:27:44.393585 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:27:44.394917 containerd[1630]: time="2025-12-12T18:27:44.394540591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:27:44.525770 containerd[1630]: time="2025-12-12T18:27:44.525703812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:27:44.526863 containerd[1630]: time="2025-12-12T18:27:44.526823725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:27:44.527110 containerd[1630]: time="2025-12-12T18:27:44.526909845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:27:44.527346 kubelet[2814]: E1212 18:27:44.527294 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:27:44.527396 kubelet[2814]: E1212 18:27:44.527363 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:27:44.528641 kubelet[2814]: E1212 18:27:44.527472 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-wf2rc_calico-apiserver(3f177278-7ed2-426c-a0d0-27da05aa7f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:27:44.528641 kubelet[2814]: E1212 18:27:44.527561 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:27:45.393900 containerd[1630]: time="2025-12-12T18:27:45.393674958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:27:45.517252 containerd[1630]: time="2025-12-12T18:27:45.517192532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:27:45.518960 containerd[1630]: time="2025-12-12T18:27:45.518854882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:27:45.518960 containerd[1630]: time="2025-12-12T18:27:45.518935192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:27:45.521206 kubelet[2814]: E1212 18:27:45.521150 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:27:45.521782 kubelet[2814]: E1212 18:27:45.521188 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:27:45.521782 kubelet[2814]: E1212 18:27:45.521574 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:27:45.523205 containerd[1630]: time="2025-12-12T18:27:45.523001938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:27:45.653145 containerd[1630]: time="2025-12-12T18:27:45.653038584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:27:45.654249 containerd[1630]: time="2025-12-12T18:27:45.654113018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:27:45.654249 containerd[1630]: time="2025-12-12T18:27:45.654201517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:27:45.654593 kubelet[2814]: E1212 18:27:45.654539 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:27:45.654735 kubelet[2814]: E1212 18:27:45.654667 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:27:45.654857 kubelet[2814]: E1212 18:27:45.654822 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6755b8ddc4-gfq4r_calico-system(c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:27:45.655059 kubelet[2814]: E1212 18:27:45.655021 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4"
Dec 12 18:27:46.375326 systemd[1]: Started sshd@13-172.234.207.166:22-139.178.89.65:37058.service - OpenSSH per-connection server daemon (139.178.89.65:37058).
Dec 12 18:27:46.384880 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 12 18:27:46.384916 kernel: audit: type=1130 audit(1765564066.374:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.207.166:22-139.178.89.65:37058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:46.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.207.166:22-139.178.89.65:37058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:46.680000 audit[5226]: USER_ACCT pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.683844 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:27:46.691104 kernel: audit: type=1101 audit(1765564066.680:804): pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.691137 sshd[5226]: Accepted publickey for core from 139.178.89.65 port 37058 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA
Dec 12 18:27:46.691644 systemd-logind[1594]: New session 14 of user core.
Dec 12 18:27:46.680000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.704801 kernel: audit: type=1103 audit(1765564066.680:805): pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.704864 kernel: audit: type=1006 audit(1765564066.680:806): pid=5226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Dec 12 18:27:46.708212 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 18:27:46.680000 audit[5226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e4c7e50 a2=3 a3=0 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:27:46.680000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:27:46.721439 kernel: audit: type=1300 audit(1765564066.680:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e4c7e50 a2=3 a3=0 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:27:46.721481 kernel: audit: type=1327 audit(1765564066.680:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:27:46.723000 audit[5226]: USER_START pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.725000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.738003 kernel: audit: type=1105 audit(1765564066.723:807): pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.738062 kernel: audit: type=1103 audit(1765564066.725:808): pid=5229 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.910255 sshd[5229]: Connection closed by 139.178.89.65 port 37058
Dec 12 18:27:46.912281 sshd-session[5226]: pam_unix(sshd:session): session closed for user core
Dec 12 18:27:46.913000 audit[5226]: USER_END pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.925029 kernel: audit: type=1106 audit(1765564066.913:809): pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.925224 systemd[1]: sshd@13-172.234.207.166:22-139.178.89.65:37058.service: Deactivated successfully.
Dec 12 18:27:46.931576 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 18:27:46.934419 systemd-logind[1594]: Session 14 logged out. Waiting for processes to exit.
Dec 12 18:27:46.913000 audit[5226]: CRED_DISP pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.943003 kernel: audit: type=1104 audit(1765564066.913:810): pid=5226 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.207.166:22-139.178.89.65:37058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:46.943294 systemd-logind[1594]: Removed session 14.
Dec 12 18:27:48.391948 kubelet[2814]: E1212 18:27:48.391898 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:27:50.392659 containerd[1630]: time="2025-12-12T18:27:50.392404250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 18:27:50.517955 containerd[1630]: time="2025-12-12T18:27:50.517916052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:27:50.519298 containerd[1630]: time="2025-12-12T18:27:50.519219215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 18:27:50.519375 containerd[1630]: time="2025-12-12T18:27:50.519276634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:27:50.519736 kubelet[2814]: E1212 18:27:50.519647 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:27:50.520262 kubelet[2814]: E1212 18:27:50.519742 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:27:50.520262 kubelet[2814]: E1212 18:27:50.519837 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57947d7c9d-zl2bt_calico-system(d4234e62-fee9-4e5f-91a6-36421f56e51b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:27:50.520262 kubelet[2814]: E1212 18:27:50.519870 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b"
Dec 12 18:27:51.984547 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 18:27:51.984879 kernel: audit: type=1130 audit(1765564071.973:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.207.166:22-139.178.89.65:55926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:51.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.207.166:22-139.178.89.65:55926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:51.973888 systemd[1]: Started sshd@14-172.234.207.166:22-139.178.89.65:55926.service - OpenSSH per-connection server daemon (139.178.89.65:55926).
Dec 12 18:27:52.283000 audit[5244]: USER_ACCT pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.293131 kernel: audit: type=1101 audit(1765564072.283:813): pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.293617 sshd[5244]: Accepted publickey for core from 139.178.89.65 port 55926 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA
Dec 12 18:27:52.295533 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:27:52.293000 audit[5244]: CRED_ACQ pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.304013 kernel: audit: type=1103 audit(1765564072.293:814): pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.310245 kernel: audit: type=1006 audit(1765564072.293:815): pid=5244 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Dec 12 18:27:52.293000 audit[5244]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee8c99fe0 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:27:52.314685 systemd-logind[1594]: New session 15 of user core.
Dec 12 18:27:52.293000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:27:52.320133 kernel: audit: type=1300 audit(1765564072.293:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee8c99fe0 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:27:52.320182 kernel: audit: type=1327 audit(1765564072.293:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:27:52.323016 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 18:27:52.328000 audit[5244]: USER_START pid=5244 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.340011 kernel: audit: type=1105 audit(1765564072.328:816): pid=5244 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.340064 kernel: audit: type=1103 audit(1765564072.334:817): pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.334000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.394364 containerd[1630]: time="2025-12-12T18:27:52.394121651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 18:27:52.513208 sshd[5247]: Connection closed by 139.178.89.65 port 55926
Dec 12 18:27:52.513734 sshd-session[5244]: pam_unix(sshd:session): session closed for user core
Dec 12 18:27:52.514000 audit[5244]: USER_END pid=5244 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.518185 systemd[1]: sshd@14-172.234.207.166:22-139.178.89.65:55926.service: Deactivated successfully.
Dec 12 18:27:52.521102 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 18:27:52.522957 systemd-logind[1594]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:27:52.525158 kernel: audit: type=1106 audit(1765564072.514:818): pid=5244 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.514000 audit[5244]: CRED_DISP pid=5244 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.527422 systemd-logind[1594]: Removed session 15.
Dec 12 18:27:52.535036 kernel: audit: type=1104 audit(1765564072.514:819): pid=5244 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:52.535268 containerd[1630]: time="2025-12-12T18:27:52.535220605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:27:52.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.207.166:22-139.178.89.65:55926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:52.536490 containerd[1630]: time="2025-12-12T18:27:52.536441838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 18:27:52.536615 containerd[1630]: time="2025-12-12T18:27:52.536505707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:27:52.536885 kubelet[2814]: E1212 18:27:52.536838 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:27:52.536885 kubelet[2814]: E1212 18:27:52.536888 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:27:52.537320 kubelet[2814]: E1212 18:27:52.536961 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dl6p8_calico-system(a8fbc410-f738-4c23-8813-68d1a7480f15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:27:52.537320 kubelet[2814]: E1212 18:27:52.537044 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15"
Dec 12 18:27:54.392706 containerd[1630]: time="2025-12-12T18:27:54.392669294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 18:27:54.527413 containerd[1630]: time="2025-12-12T18:27:54.527240568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:27:54.528238 containerd[1630]: time="2025-12-12T18:27:54.528182282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 18:27:54.528495 containerd[1630]: time="2025-12-12T18:27:54.528227292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:27:54.528767 kubelet[2814]: E1212 18:27:54.528731 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:27:54.529864 kubelet[2814]: E1212 18:27:54.528852 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:27:54.529864 kubelet[2814]: E1212 18:27:54.529186 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:27:54.530877 containerd[1630]: time="2025-12-12T18:27:54.530805769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:27:54.674442 containerd[1630]: time="2025-12-12T18:27:54.674107786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:27:54.675491 containerd[1630]: time="2025-12-12T18:27:54.675399510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:27:54.675693 containerd[1630]: time="2025-12-12T18:27:54.675665458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:27:54.676080 kubelet[2814]: E1212 18:27:54.676007 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:27:54.676080 kubelet[2814]: E1212 18:27:54.676049 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:27:54.676340 kubelet[2814]: E1212 18:27:54.676269 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xnbvh_calico-system(91aeba92-11d6-4129-85e3-7dedd0625bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:27:54.676418 kubelet[2814]: E1212 18:27:54.676395 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3"
Dec 12 18:27:57.586277 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 18:27:57.586360 kernel: audit: type=1130 audit(1765564077.576:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.207.166:22-139.178.89.65:55938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:57.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.207.166:22-139.178.89.65:55938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:57.577232 systemd[1]: Started sshd@15-172.234.207.166:22-139.178.89.65:55938.service - OpenSSH per-connection server daemon (139.178.89.65:55938).
Dec 12 18:27:57.881000 audit[5261]: USER_ACCT pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.885202 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:27:57.891526 sshd[5261]: Accepted publickey for core from 139.178.89.65 port 55938 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA
Dec 12 18:27:57.892082 kernel: audit: type=1101 audit(1765564077.881:822): pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.882000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.900022 kernel: audit: type=1103 audit(1765564077.882:823): pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.906273 kernel: audit: type=1006 audit(1765564077.882:824): pid=5261 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Dec 12 18:27:57.906090 systemd-logind[1594]: New session 16 of user core.
Dec 12 18:27:57.882000 audit[5261]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaf49fe30 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:27:57.919195 kernel: audit: type=1300 audit(1765564077.882:824): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaf49fe30 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:27:57.919237 kernel: audit: type=1327 audit(1765564077.882:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:27:57.882000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:27:57.920261 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 18:27:57.934020 kernel: audit: type=1105 audit(1765564077.922:825): pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.922000 audit[5261]: USER_START pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.933000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:57.943068 kernel: audit: type=1103 audit(1765564077.933:826): pid=5264 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:58.118167 sshd[5264]: Connection closed by 139.178.89.65 port 55938
Dec 12 18:27:58.118744 sshd-session[5261]: pam_unix(sshd:session): session closed for user core
Dec 12 18:27:58.119000 audit[5261]: USER_END pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:58.124284 systemd[1]: sshd@15-172.234.207.166:22-139.178.89.65:55938.service: Deactivated successfully.
Dec 12 18:27:58.127572 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 18:27:58.130099 kernel: audit: type=1106 audit(1765564078.119:827): pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:58.129031 systemd-logind[1594]: Session 16 logged out. Waiting for processes to exit.
Dec 12 18:27:58.119000 audit[5261]: CRED_DISP pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:58.135084 systemd-logind[1594]: Removed session 16.
Dec 12 18:27:58.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.207.166:22-139.178.89.65:55938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:27:58.140006 kernel: audit: type=1104 audit(1765564078.119:828): pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:27:59.396896 containerd[1630]: time="2025-12-12T18:27:59.396706270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:27:59.400382 kubelet[2814]: E1212 18:27:59.400072 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69"
Dec 12 18:27:59.534569 containerd[1630]: time="2025-12-12T18:27:59.534320056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:27:59.535948 containerd[1630]: time="2025-12-12T18:27:59.535658578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:27:59.535948 containerd[1630]: time="2025-12-12T18:27:59.535683198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 12 18:27:59.536576 kubelet[2814]: E1212 18:27:59.536305 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:27:59.536576 kubelet[2814]: E1212 18:27:59.536348 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:27:59.536576 kubelet[2814]: E1212 18:27:59.536427 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d49f44685-gclmc_calico-apiserver(c1ce557f-fee1-488f-bf03-0d09f4a1964c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:27:59.536576 kubelet[2814]: E1212 18:27:59.536461 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c"
Dec 12 18:28:00.394903 kubelet[2814]: E1212 18:28:00.394698 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4"
Dec 12 18:28:03.190388 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 18:28:03.190497 kernel: audit: type=1130 audit(1765564083.179:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.207.166:22-139.178.89.65:55102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:28:03.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.207.166:22-139.178.89.65:55102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:28:03.180234 systemd[1]: Started sshd@16-172.234.207.166:22-139.178.89.65:55102.service - OpenSSH per-connection server daemon (139.178.89.65:55102).
Dec 12 18:28:03.394269 kubelet[2814]: E1212 18:28:03.394194 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b"
Dec 12 18:28:03.495393 sshd[5296]: Accepted publickey for core from 139.178.89.65 port 55102 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA
Dec 12 18:28:03.494000 audit[5296]: USER_ACCT pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:28:03.506023 kernel: audit: type=1101 audit(1765564083.494:831): pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:28:03.506719 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:28:03.505000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:28:03.516025 kernel: audit: type=1103 audit(1765564083.505:832): pid=5296 uid=0 auid=4294967295 ses=4294967295
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.521236 systemd-logind[1594]: New session 17 of user core. Dec 12 18:28:03.505000 audit[5296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1f679500 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:03.531037 kernel: audit: type=1006 audit(1765564083.505:833): pid=5296 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 18:28:03.531083 kernel: audit: type=1300 audit(1765564083.505:833): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1f679500 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:03.505000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:03.539250 kernel: audit: type=1327 audit(1765564083.505:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:03.540313 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:28:03.545000 audit[5296]: USER_START pid=5296 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.555010 kernel: audit: type=1105 audit(1765564083.545:834): pid=5296 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.554000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.562014 kernel: audit: type=1103 audit(1765564083.554:835): pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.732930 sshd[5299]: Connection closed by 139.178.89.65 port 55102 Dec 12 18:28:03.733803 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:03.734000 audit[5296]: USER_END pid=5296 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.739129 systemd[1]: sshd@16-172.234.207.166:22-139.178.89.65:55102.service: Deactivated successfully. 
Dec 12 18:28:03.741661 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:28:03.734000 audit[5296]: CRED_DISP pid=5296 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.745095 systemd-logind[1594]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:28:03.746058 kernel: audit: type=1106 audit(1765564083.734:836): pid=5296 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.746132 kernel: audit: type=1104 audit(1765564083.734:837): pid=5296 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:03.748297 systemd-logind[1594]: Removed session 17. Dec 12 18:28:03.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.207.166:22-139.178.89.65:55102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:03.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.207.166:22-139.178.89.65:55106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:03.796246 systemd[1]: Started sshd@17-172.234.207.166:22-139.178.89.65:55106.service - OpenSSH per-connection server daemon (139.178.89.65:55106). 
Dec 12 18:28:04.101000 audit[5312]: USER_ACCT pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.103099 sshd[5312]: Accepted publickey for core from 139.178.89.65 port 55106 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:04.104726 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:04.103000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.103000 audit[5312]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe35fa1ac0 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:04.103000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:04.111457 systemd-logind[1594]: New session 18 of user core. Dec 12 18:28:04.118147 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 18:28:04.120000 audit[5312]: USER_START pid=5312 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.122000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.458115 sshd[5315]: Connection closed by 139.178.89.65 port 55106 Dec 12 18:28:04.458967 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:04.459000 audit[5312]: USER_END pid=5312 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.459000 audit[5312]: CRED_DISP pid=5312 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.463581 systemd-logind[1594]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:28:04.464937 systemd[1]: sshd@17-172.234.207.166:22-139.178.89.65:55106.service: Deactivated successfully. Dec 12 18:28:04.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.207.166:22-139.178.89.65:55106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:04.469623 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:28:04.474440 systemd-logind[1594]: Removed session 18. Dec 12 18:28:04.519408 systemd[1]: Started sshd@18-172.234.207.166:22-139.178.89.65:55108.service - OpenSSH per-connection server daemon (139.178.89.65:55108). Dec 12 18:28:04.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.207.166:22-139.178.89.65:55108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:04.834000 audit[5325]: USER_ACCT pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.835380 sshd[5325]: Accepted publickey for core from 139.178.89.65 port 55108 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:04.836000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.836000 audit[5325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc46c08270 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:04.836000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:04.840275 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:04.851449 systemd-logind[1594]: New session 19 of user core. 
Dec 12 18:28:04.858305 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:28:04.862000 audit[5325]: USER_START pid=5325 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:04.865000 audit[5328]: CRED_ACQ pid=5328 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:05.575000 audit[5367]: NETFILTER_CFG table=filter:133 family=2 entries=26 op=nft_register_rule pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:05.575000 audit[5367]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe3959f530 a2=0 a3=7ffe3959f51c items=0 ppid=2925 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:05.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:05.584000 audit[5367]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:05.584000 audit[5367]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe3959f530 a2=0 a3=0 items=0 ppid=2925 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:05.584000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:05.620975 sshd[5328]: Connection closed by 139.178.89.65 port 55108 Dec 12 18:28:05.621541 sshd-session[5325]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:05.621000 audit[5325]: USER_END pid=5325 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:05.622000 audit[5325]: CRED_DISP pid=5325 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:05.626957 systemd[1]: sshd@18-172.234.207.166:22-139.178.89.65:55108.service: Deactivated successfully. Dec 12 18:28:05.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.207.166:22-139.178.89.65:55108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:05.629964 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:28:05.631203 systemd-logind[1594]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:28:05.633999 systemd-logind[1594]: Removed session 19. Dec 12 18:28:05.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.207.166:22-139.178.89.65:55122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:05.684107 systemd[1]: Started sshd@19-172.234.207.166:22-139.178.89.65:55122.service - OpenSSH per-connection server daemon (139.178.89.65:55122). 
Dec 12 18:28:05.996000 audit[5372]: USER_ACCT pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:05.998000 sshd[5372]: Accepted publickey for core from 139.178.89.65 port 55122 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:05.998000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:05.999000 audit[5372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb2fb8700 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:05.999000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:06.000505 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:06.007059 systemd-logind[1594]: New session 20 of user core. Dec 12 18:28:06.012897 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 18:28:06.017000 audit[5372]: USER_START pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.019000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.371005 sshd[5375]: Connection closed by 139.178.89.65 port 55122 Dec 12 18:28:06.371508 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:06.372000 audit[5372]: USER_END pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.372000 audit[5372]: CRED_DISP pid=5372 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.378108 systemd-logind[1594]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:28:06.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.207.166:22-139.178.89.65:55122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.380466 systemd[1]: sshd@19-172.234.207.166:22-139.178.89.65:55122.service: Deactivated successfully. 
Dec 12 18:28:06.383874 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:28:06.388194 systemd-logind[1594]: Removed session 20. Dec 12 18:28:06.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.234.207.166:22-139.178.89.65:55130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.433919 systemd[1]: Started sshd@20-172.234.207.166:22-139.178.89.65:55130.service - OpenSSH per-connection server daemon (139.178.89.65:55130). Dec 12 18:28:06.623000 audit[5389]: NETFILTER_CFG table=filter:135 family=2 entries=38 op=nft_register_rule pid=5389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:06.623000 audit[5389]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff1e0f2490 a2=0 a3=7fff1e0f247c items=0 ppid=2925 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:06.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:06.629000 audit[5389]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:06.629000 audit[5389]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff1e0f2490 a2=0 a3=0 items=0 ppid=2925 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:06.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:06.746000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.748634 sshd[5385]: Accepted publickey for core from 139.178.89.65 port 55130 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:06.748000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.748000 audit[5385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe43c0d610 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:06.748000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:06.750299 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:06.758292 systemd-logind[1594]: New session 21 of user core. Dec 12 18:28:06.763263 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 18:28:06.767000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.769000 audit[5390]: CRED_ACQ pid=5390 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.968556 sshd[5390]: Connection closed by 139.178.89.65 port 55130 Dec 12 18:28:06.970535 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:06.975000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.976000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:06.981766 systemd[1]: sshd@20-172.234.207.166:22-139.178.89.65:55130.service: Deactivated successfully. Dec 12 18:28:06.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.234.207.166:22-139.178.89.65:55130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:06.985363 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:28:06.988353 systemd-logind[1594]: Session 21 logged out. 
Waiting for processes to exit. Dec 12 18:28:06.990456 systemd-logind[1594]: Removed session 21. Dec 12 18:28:07.399007 kubelet[2814]: E1212 18:28:07.398151 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:28:08.393938 kubelet[2814]: E1212 18:28:08.393861 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:28:11.391628 kubelet[2814]: E1212 18:28:11.391252 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:28:12.039090 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 18:28:12.039188 kernel: audit: type=1130 
audit(1765564092.033:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.234.207.166:22-139.178.89.65:59004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:12.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.234.207.166:22-139.178.89.65:59004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:12.034232 systemd[1]: Started sshd@21-172.234.207.166:22-139.178.89.65:59004.service - OpenSSH per-connection server daemon (139.178.89.65:59004). Dec 12 18:28:12.342000 audit[5402]: USER_ACCT pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.379496 kernel: audit: type=1101 audit(1765564092.342:880): pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.345789 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:12.382631 sshd[5402]: Accepted publickey for core from 139.178.89.65 port 59004 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:12.343000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.395197 systemd-logind[1594]: New session 22 of user core. 
Dec 12 18:28:12.397113 kernel: audit: type=1103 audit(1765564092.343:881): pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.397164 kubelet[2814]: E1212 18:28:12.396526 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-gclmc" podUID="c1ce557f-fee1-488f-bf03-0d09f4a1964c" Dec 12 18:28:12.397164 kubelet[2814]: E1212 18:28:12.396615 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d49f44685-wf2rc" podUID="3f177278-7ed2-426c-a0d0-27da05aa7f69" Dec 12 18:28:12.401098 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 18:28:12.405014 kernel: audit: type=1006 audit(1765564092.343:882): pid=5402 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 18:28:12.343000 audit[5402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcecbd010 a2=3 a3=0 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:12.414114 kernel: audit: type=1300 audit(1765564092.343:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcecbd010 a2=3 a3=0 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:12.420004 kernel: audit: type=1327 audit(1765564092.343:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:12.343000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:12.415000 audit[5402]: USER_START pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.431035 kernel: audit: type=1105 audit(1765564092.415:883): pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.419000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.443013 kernel: audit: type=1103 audit(1765564092.419:884): pid=5405 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.600121 sshd[5405]: Connection closed by 139.178.89.65 port 59004 Dec 12 18:28:12.601190 sshd-session[5402]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:12.614015 kernel: audit: type=1106 audit(1765564092.603:885): pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.603000 audit[5402]: USER_END pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.608755 systemd[1]: sshd@21-172.234.207.166:22-139.178.89.65:59004.service: Deactivated successfully. Dec 12 18:28:12.609155 systemd-logind[1594]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:28:12.613053 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:28:12.616773 systemd-logind[1594]: Removed session 22. 
Dec 12 18:28:12.603000 audit[5402]: CRED_DISP pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.625091 kernel: audit: type=1104 audit(1765564092.603:886): pid=5402 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:12.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.234.207.166:22-139.178.89.65:59004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:13.395771 kubelet[2814]: E1212 18:28:13.395714 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:28:16.394039 kubelet[2814]: E1212 18:28:16.393451 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57947d7c9d-zl2bt" podUID="d4234e62-fee9-4e5f-91a6-36421f56e51b" Dec 12 18:28:16.554000 audit[5417]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:16.554000 audit[5417]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd516cb70 a2=0 a3=7ffdd516cb5c items=0 ppid=2925 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:16.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:16.562000 audit[5417]: NETFILTER_CFG table=nat:138 family=2 entries=104 op=nft_register_chain pid=5417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:28:16.562000 audit[5417]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdd516cb70 a2=0 a3=7ffdd516cb5c items=0 ppid=2925 pid=5417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:16.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:28:17.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.234.207.166:22-139.178.89.65:59012 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:17.666121 systemd[1]: Started sshd@22-172.234.207.166:22-139.178.89.65:59012.service - OpenSSH per-connection server daemon (139.178.89.65:59012). Dec 12 18:28:17.668202 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 18:28:17.668239 kernel: audit: type=1130 audit(1765564097.665:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.234.207.166:22-139.178.89.65:59012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:17.972000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:17.982273 kernel: audit: type=1101 audit(1765564097.972:891): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:17.976843 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:17.982962 sshd[5421]: Accepted publickey for core from 139.178.89.65 port 59012 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:17.975000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:17.992812 systemd-logind[1594]: New session 23 of user core. 
Dec 12 18:28:17.996529 kernel: audit: type=1103 audit(1765564097.975:892): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.011056 kernel: audit: type=1006 audit(1765564097.975:893): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 18:28:18.011575 kernel: audit: type=1300 audit(1765564097.975:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeca60e0b0 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:17.975000 audit[5421]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeca60e0b0 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:18.003159 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 18:28:17.975000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:18.012000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.019906 kernel: audit: type=1327 audit(1765564097.975:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:18.019969 kernel: audit: type=1105 audit(1765564098.012:894): pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.016000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.029088 kernel: audit: type=1103 audit(1765564098.016:895): pid=5424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.207908 sshd[5424]: Connection closed by 139.178.89.65 port 59012 Dec 12 18:28:18.209028 sshd-session[5421]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:18.210000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.215618 systemd[1]: sshd@22-172.234.207.166:22-139.178.89.65:59012.service: Deactivated successfully. Dec 12 18:28:18.219509 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:28:18.222019 kernel: audit: type=1106 audit(1765564098.210:896): pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.210000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.229429 systemd-logind[1594]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:28:18.230009 kernel: audit: type=1104 audit(1765564098.210:897): pid=5421 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:18.232156 systemd-logind[1594]: Removed session 23. Dec 12 18:28:18.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.234.207.166:22-139.178.89.65:59012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:20.396012 kubelet[2814]: E1212 18:28:20.395730 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xnbvh" podUID="91aeba92-11d6-4129-85e3-7dedd0625bf3" Dec 12 18:28:21.392518 kubelet[2814]: E1212 18:28:21.392247 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dl6p8" podUID="a8fbc410-f738-4c23-8813-68d1a7480f15" Dec 12 18:28:23.282138 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:28:23.282250 kernel: audit: type=1130 audit(1765564103.274:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.234.207.166:22-139.178.89.65:52202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:28:23.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.234.207.166:22-139.178.89.65:52202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:23.275126 systemd[1]: Started sshd@23-172.234.207.166:22-139.178.89.65:52202.service - OpenSSH per-connection server daemon (139.178.89.65:52202). Dec 12 18:28:23.393011 kubelet[2814]: E1212 18:28:23.392430 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:28:23.593475 sshd[5436]: Accepted publickey for core from 139.178.89.65 port 52202 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:28:23.592000 audit[5436]: USER_ACCT pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.595640 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:28:23.603010 kernel: audit: type=1101 audit(1765564103.592:900): pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.594000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.609927 systemd-logind[1594]: New session 24 of 
user core. Dec 12 18:28:23.613044 kernel: audit: type=1103 audit(1765564103.594:901): pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.614128 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:28:23.621061 kernel: audit: type=1006 audit(1765564103.594:902): pid=5436 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 18:28:23.594000 audit[5436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc62c404d0 a2=3 a3=0 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:23.633022 kernel: audit: type=1300 audit(1765564103.594:902): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc62c404d0 a2=3 a3=0 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:28:23.594000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:23.638010 kernel: audit: type=1327 audit(1765564103.594:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:28:23.621000 audit[5436]: USER_START pid=5436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.649040 kernel: audit: type=1105 audit(1765564103.621:903): pid=5436 uid=0 auid=500 
ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.629000 audit[5439]: CRED_ACQ pid=5439 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.660002 kernel: audit: type=1103 audit(1765564103.629:904): pid=5439 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.833030 sshd[5439]: Connection closed by 139.178.89.65 port 52202 Dec 12 18:28:23.833578 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Dec 12 18:28:23.835000 audit[5436]: USER_END pid=5436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.846011 kernel: audit: type=1106 audit(1765564103.835:905): pid=5436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.845000 audit[5436]: CRED_DISP pid=5436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.849510 systemd[1]: sshd@23-172.234.207.166:22-139.178.89.65:52202.service: Deactivated successfully. Dec 12 18:28:23.856205 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:28:23.857084 kernel: audit: type=1104 audit(1765564103.845:906): pid=5436 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:28:23.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.234.207.166:22-139.178.89.65:52202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:28:23.858247 systemd-logind[1594]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:28:23.862161 systemd-logind[1594]: Removed session 24. 
Dec 12 18:28:24.393476 kubelet[2814]: E1212 18:28:24.393354 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6755b8ddc4-gfq4r" podUID="c2a87dc3-7f3a-476b-8c32-9dcd8f2f92a4" Dec 12 18:28:25.393546 kubelet[2814]: E1212 18:28:25.393501 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:28:25.396784 kubelet[2814]: E1212 18:28:25.396753 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"