Aug 13 01:14:44.863366 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:14:44.863387 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:14:44.863396 kernel: BIOS-provided physical RAM map:
Aug 13 01:14:44.863404 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:14:44.863410 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:14:44.863416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:14:44.863422 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:14:44.863449 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:14:44.863455 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:14:44.863460 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:14:44.863466 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:14:44.863472 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:14:44.863480 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:14:44.863486 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:14:44.863492 kernel: NX (Execute Disable) protection: active
Aug 13 01:14:44.863503 kernel: APIC: Static calls initialized
Aug 13 01:14:44.863509 kernel: SMBIOS 2.8 present.
Aug 13 01:14:44.863517 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:14:44.863523 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:14:44.863529 kernel: Hypervisor detected: KVM
Aug 13 01:14:44.863535 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:14:44.863541 kernel: kvm-clock: using sched offset of 5673450295 cycles
Aug 13 01:14:44.863548 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:14:44.863554 kernel: tsc: Detected 2000.002 MHz processor
Aug 13 01:14:44.863561 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:14:44.864630 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:14:44.864639 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:14:44.864650 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:14:44.864657 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:14:44.864663 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:14:44.864669 kernel: Using GB pages for direct mapping
Aug 13 01:14:44.864675 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:14:44.864681 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:14:44.864688 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864694 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864701 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864709 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:14:44.864715 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864722 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864728 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864738 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:14:44.864744 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:14:44.864753 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:14:44.864760 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:14:44.864766 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:14:44.864773 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:14:44.864779 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:14:44.864786 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:14:44.864792 kernel: No NUMA configuration found
Aug 13 01:14:44.864798 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:14:44.864807 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:14:44.864814 kernel: Zone ranges:
Aug 13 01:14:44.864820 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:14:44.864826 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:14:44.864833 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:14:44.864839 kernel: Device empty
Aug 13 01:14:44.864846 kernel: Movable zone start for each node
Aug 13 01:14:44.864852 kernel: Early memory node ranges
Aug 13 01:14:44.864858 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:14:44.864864 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:14:44.864873 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:14:44.864880 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:14:44.864886 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:14:44.864892 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:14:44.864899 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:14:44.864905 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:14:44.864911 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:14:44.864918 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:14:44.864924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:14:44.864933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:14:44.864939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:14:44.864946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:14:44.864952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:14:44.864959 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:14:44.864965 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:14:44.864971 kernel: TSC deadline timer available
Aug 13 01:14:44.864978 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:14:44.864984 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:14:44.864993 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:14:44.864999 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:14:44.865005 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:14:44.865012 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:14:44.865018 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:14:44.865024 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:14:44.865030 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:14:44.865037 kernel: kvm-guest: setup PV sched yield
Aug 13 01:14:44.865043 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:14:44.865050 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:14:44.865058 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:14:44.865065 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:14:44.865072 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:14:44.865078 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:14:44.865084 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:14:44.865090 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:14:44.865097 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:14:44.865105 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:14:44.865114 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:14:44.865120 kernel: random: crng init done
Aug 13 01:14:44.865127 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:14:44.865133 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:14:44.865140 kernel: Fallback order for Node 0: 0
Aug 13 01:14:44.865146 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:14:44.865153 kernel: Policy zone: Normal
Aug 13 01:14:44.865159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:14:44.865166 kernel: software IO TLB: area num 2.
Aug 13 01:14:44.865174 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:14:44.865181 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:14:44.865187 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:14:44.865193 kernel: Dynamic Preempt: voluntary
Aug 13 01:14:44.865200 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:14:44.865207 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:14:44.865213 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:14:44.865220 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:14:44.865227 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:14:44.865236 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:14:44.865242 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:14:44.865249 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:14:44.865255 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:14:44.865269 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:14:44.865278 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:14:44.865285 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:14:44.865292 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:14:44.865298 kernel: Console: colour VGA+ 80x25
Aug 13 01:14:44.865305 kernel: printk: legacy console [tty0] enabled
Aug 13 01:14:44.865312 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:14:44.865321 kernel: ACPI: Core revision 20240827
Aug 13 01:14:44.865328 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:14:44.865334 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:14:44.865341 kernel: x2apic enabled
Aug 13 01:14:44.865348 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:14:44.865357 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:14:44.865363 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:14:44.865370 kernel: kvm-guest: setup PV IPIs
Aug 13 01:14:44.865377 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:14:44.865384 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns
Aug 13 01:14:44.865390 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Aug 13 01:14:44.865397 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:14:44.865404 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:14:44.865411 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:14:44.865419 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:14:44.865426 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:14:44.865433 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:14:44.865440 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:14:44.865447 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:14:44.865454 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:14:44.865460 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:14:44.865468 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:14:44.865477 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:14:44.865483 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:14:44.865490 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:14:44.865497 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:14:44.865504 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:14:44.865510 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:14:44.865517 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:14:44.865524 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:14:44.865530 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:14:44.865539 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:14:44.865547 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:14:44.865553 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:14:44.865560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:14:44.865581 kernel: landlock: Up and running.
Aug 13 01:14:44.865587 kernel: SELinux: Initializing.
Aug 13 01:14:44.865594 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:14:44.865601 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:14:44.865608 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:14:44.865617 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:14:44.865624 kernel: ... version: 0
Aug 13 01:14:44.865630 kernel: ... bit width: 48
Aug 13 01:14:44.865637 kernel: ... generic registers: 6
Aug 13 01:14:44.865644 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:14:44.865650 kernel: ... max period: 00007fffffffffff
Aug 13 01:14:44.865657 kernel: ... fixed-purpose events: 0
Aug 13 01:14:44.865663 kernel: ... event mask: 000000000000003f
Aug 13 01:14:44.865670 kernel: signal: max sigframe size: 3376
Aug 13 01:14:44.865679 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:14:44.865686 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:14:44.865693 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:14:44.865699 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:14:44.865706 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:14:44.865712 kernel: .... node #0, CPUs: #1
Aug 13 01:14:44.865719 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:14:44.865726 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 01:14:44.865733 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:14:44.865742 kernel: devtmpfs: initialized
Aug 13 01:14:44.865749 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:14:44.865755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:14:44.865762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:14:44.865769 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:14:44.865776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:14:44.865782 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:14:44.865789 kernel: audit: type=2000 audit(1755047682.004:1): state=initialized audit_enabled=0 res=1
Aug 13 01:14:44.865796 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:14:44.865805 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:14:44.865811 kernel: cpuidle: using governor menu
Aug 13 01:14:44.865818 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:14:44.865825 kernel: dca service started, version 1.12.1
Aug 13 01:14:44.865832 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:14:44.865838 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:14:44.865845 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:14:44.865852 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:14:44.865859 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:14:44.865868 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:14:44.865875 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:14:44.865881 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:14:44.865888 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:14:44.865895 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:14:44.865901 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:14:44.865908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:14:44.865915 kernel: ACPI: Interpreter enabled
Aug 13 01:14:44.865921 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:14:44.865930 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:14:44.865937 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:14:44.865944 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:14:44.865951 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:14:44.865957 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:14:44.866125 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:14:44.866239 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:14:44.866346 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:14:44.866359 kernel: PCI host bridge to bus 0000:00
Aug 13 01:14:44.866470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:14:44.868436 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:14:44.868554 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:14:44.868846 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:14:44.868944 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:14:44.869039 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:14:44.869140 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:14:44.869268 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:14:44.869390 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:14:44.869499 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:14:44.870296 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:14:44.870415 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:14:44.870528 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:14:44.870707 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:14:44.870819 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:14:44.871099 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:14:44.871204 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:14:44.871318 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:14:44.871425 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:14:44.871538 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:14:44.871663 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:14:44.871905 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:14:44.872022 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:14:44.872129 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:14:44.872241 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:14:44.872352 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:14:44.872456 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:14:44.872592 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:14:44.872703 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:14:44.872713 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:14:44.872720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:14:44.872727 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:14:44.872734 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:14:44.872745 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:14:44.872751 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:14:44.872758 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:14:44.872765 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:14:44.872771 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:14:44.872778 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:14:44.872785 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:14:44.872791 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:14:44.872798 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:14:44.872808 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:14:44.872815 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:14:44.872821 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:14:44.872828 kernel: iommu: Default domain type: Translated
Aug 13 01:14:44.872835 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:14:44.872841 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:14:44.872848 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:14:44.872855 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:14:44.872861 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:14:44.872967 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:14:44.873071 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:14:44.873174 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:14:44.873184 kernel: vgaarb: loaded
Aug 13 01:14:44.873191 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:14:44.873198 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:14:44.873205 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:14:44.873211 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:14:44.873222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:14:44.873229 kernel: pnp: PnP ACPI init
Aug 13 01:14:44.873349 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:14:44.873360 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:14:44.873368 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:14:44.873375 kernel: NET: Registered PF_INET protocol family
Aug 13 01:14:44.873381 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:14:44.873388 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:14:44.873398 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:14:44.873405 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:14:44.873412 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:14:44.873419 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:14:44.873425 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:14:44.873432 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:14:44.873439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:14:44.873445 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:14:44.873544 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:14:44.873673 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:14:44.873771 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:14:44.873868 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:14:44.873963 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:14:44.874057 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:14:44.874066 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:14:44.874073 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:14:44.874080 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:14:44.874090 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns
Aug 13 01:14:44.874097 kernel: Initialise system trusted keyrings
Aug 13 01:14:44.874104 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:14:44.874111 kernel: Key type asymmetric registered
Aug 13 01:14:44.874117 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:14:44.874124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:14:44.874131 kernel: io scheduler mq-deadline registered
Aug 13 01:14:44.874137 kernel: io scheduler kyber registered
Aug 13 01:14:44.874144 kernel: io scheduler bfq registered
Aug 13 01:14:44.874153 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:14:44.874161 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:14:44.874168 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:14:44.874174 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:14:44.874181 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:14:44.874188 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:14:44.874195 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:14:44.874202 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:14:44.874311 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:14:44.874325 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Aug 13 01:14:44.874423 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:14:44.874522 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:14:44 UTC (1755047684)
Aug 13 01:14:44.874649 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:14:44.874660 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:14:44.874667 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:14:44.874674 kernel: Segment Routing with IPv6
Aug 13 01:14:44.874680 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:14:44.874690 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:14:44.874697 kernel: Key type dns_resolver registered
Aug 13 01:14:44.874704 kernel: IPI shorthand broadcast: enabled
Aug 13 01:14:44.874711 kernel: sched_clock: Marking stable (2732004720, 215622564)->(2979255661, -31628377)
Aug 13 01:14:44.874717 kernel: registered taskstats version 1
Aug 13 01:14:44.874724 kernel: Loading compiled-in X.509 certificates
Aug 13 01:14:44.874731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:14:44.874737 kernel: Demotion targets for Node 0: null
Aug 13 01:14:44.874744 kernel: Key type .fscrypt registered
Aug 13 01:14:44.874753 kernel: Key type fscrypt-provisioning registered
Aug 13 01:14:44.874760 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:14:44.874767 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:14:44.874773 kernel: ima: No architecture policies found
Aug 13 01:14:44.874780 kernel: clk: Disabling unused clocks
Aug 13 01:14:44.874786 kernel: Warning: unable to open an initial console.
Aug 13 01:14:44.874793 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:14:44.874800 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:14:44.874807 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:14:44.874816 kernel: Run /init as init process
Aug 13 01:14:44.874823 kernel: with arguments:
Aug 13 01:14:44.874829 kernel: /init
Aug 13 01:14:44.874836 kernel: with environment:
Aug 13 01:14:44.874843 kernel: HOME=/
Aug 13 01:14:44.874865 kernel: TERM=linux
Aug 13 01:14:44.874874 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:14:44.874882 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:14:44.874895 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:14:44.874903 systemd[1]: Detected virtualization kvm.
Aug 13 01:14:44.874911 systemd[1]: Detected architecture x86-64.
Aug 13 01:14:44.874918 systemd[1]: Running in initrd.
Aug 13 01:14:44.874925 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:14:44.874933 systemd[1]: Hostname set to .
Aug 13 01:14:44.874941 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:14:44.874949 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:14:44.874959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:14:44.874966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:14:44.875195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:14:44.875207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:14:44.875215 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:14:44.875224 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:14:44.875232 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:14:44.875243 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:14:44.875250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:14:44.875257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:14:44.875265 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:14:44.875272 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:14:44.875280 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:14:44.875287 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:14:44.875295 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:14:44.875305 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:14:44.875313 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:14:44.875321 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:14:44.875328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:14:44.875336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:14:44.875343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:14:44.875351 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:14:44.875361 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:14:44.875368 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:14:44.875376 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:14:44.875383 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 01:14:44.875391 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:14:44.875399 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:14:44.875406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:14:44.875416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:14:44.875424 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:14:44.875431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:14:44.875439 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:14:44.875450 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:14:44.875484 systemd-journald[205]: Collecting audit messages is disabled.
Aug 13 01:14:44.875506 systemd-journald[205]: Journal started
Aug 13 01:14:44.875525 systemd-journald[205]: Runtime Journal (/run/log/journal/17c8d0d6a6c84624919886efd26cefc8) is 8M, max 78.5M, 70.5M free.
Aug 13 01:14:44.879589 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:14:44.861682 systemd-modules-load[207]: Inserted module 'overlay'
Aug 13 01:14:44.904592 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:14:44.906383 systemd-modules-load[207]: Inserted module 'br_netfilter'
Aug 13 01:14:44.960841 kernel: Bridge firewalling registered
Aug 13 01:14:44.912109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:14:44.962456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:14:44.963217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:14:44.967426 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:14:44.970661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:14:44.977202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:14:44.979102 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:14:44.987594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:14:44.995398 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:14:44.996470 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 01:14:44.996969 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:14:45.000699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:14:45.002099 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:14:45.020450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:14:45.032634 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:14:45.059107 systemd-resolved[245]: Positive Trust Anchors:
Aug 13 01:14:45.059816 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:14:45.059846 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:14:45.064860 systemd-resolved[245]: Defaulting to hostname 'linux'.
Aug 13 01:14:45.065847 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:14:45.066671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:14:45.122597 kernel: SCSI subsystem initialized
Aug 13 01:14:45.131634 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:14:45.141603 kernel: iscsi: registered transport (tcp)
Aug 13 01:14:45.160767 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:14:45.160801 kernel: QLogic iSCSI HBA Driver
Aug 13 01:14:45.179805 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:14:45.195282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:14:45.197980 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:14:45.246152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:14:45.248678 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:14:45.295592 kernel: raid6: avx2x4 gen() 30999 MB/s
Aug 13 01:14:45.312590 kernel: raid6: avx2x2 gen() 30574 MB/s
Aug 13 01:14:45.330935 kernel: raid6: avx2x1 gen() 22091 MB/s
Aug 13 01:14:45.330951 kernel: raid6: using algorithm avx2x4 gen() 30999 MB/s
Aug 13 01:14:45.349908 kernel: raid6: .... xor() 5269 MB/s, rmw enabled
Aug 13 01:14:45.349936 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:14:45.368598 kernel: xor: automatically using best checksumming function avx
Aug 13 01:14:45.498606 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:14:45.506169 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:14:45.508883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:14:45.529750 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Aug 13 01:14:45.534617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:14:45.537651 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:14:45.560844 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Aug 13 01:14:45.587407 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:14:45.589618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:14:45.648316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:14:45.650981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:14:45.707757 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 01:14:45.719598 kernel: libata version 3.00 loaded.
Aug 13 01:14:45.726600 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:14:45.726764 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:14:45.732701 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 01:14:45.732858 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 01:14:45.732988 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:14:45.740591 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:14:45.740613 kernel: scsi host1: ahci
Aug 13 01:14:45.741710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:14:45.741816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:14:45.743911 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:14:45.748661 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:14:45.748291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:14:45.751506 kernel: scsi host2: ahci
Aug 13 01:14:45.754171 kernel: scsi host3: ahci
Aug 13 01:14:45.754333 kernel: scsi host4: ahci
Aug 13 01:14:45.755813 kernel: scsi host5: ahci
Aug 13 01:14:45.767772 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 01:14:45.767821 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:14:45.901603 kernel: scsi host6: ahci
Aug 13 01:14:45.904594 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 0
Aug 13 01:14:45.904635 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 0
Aug 13 01:14:45.904647 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 0
Aug 13 01:14:45.904658 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 0
Aug 13 01:14:45.904667 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 0
Aug 13 01:14:45.904688 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 0
Aug 13 01:14:45.988918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:14:46.217584 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:14:46.217641 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:14:46.217652 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:14:46.217662 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:14:46.219314 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:14:46.219799 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:14:46.227009 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:14:46.245509 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:14:46.251037 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:14:46.274911 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:14:46.275073 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:14:46.275207 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:14:46.286846 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:14:46.286869 kernel: GPT:9289727 != 9297919
Aug 13 01:14:46.286885 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:14:46.288263 kernel: GPT:9289727 != 9297919
Aug 13 01:14:46.289151 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:14:46.290515 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:14:46.293236 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:14:46.337969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:14:46.358382 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:14:46.365800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:14:46.366378 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:14:46.367954 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:14:46.376896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:14:46.379018 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:14:46.379628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:14:46.380855 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:14:46.382750 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:14:46.385170 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:14:46.402410 disk-uuid[632]: Primary Header is updated.
Aug 13 01:14:46.402410 disk-uuid[632]: Secondary Entries is updated.
Aug 13 01:14:46.402410 disk-uuid[632]: Secondary Header is updated.
Aug 13 01:14:46.407296 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:14:46.412602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:14:46.430594 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:14:47.431237 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:14:47.431296 disk-uuid[635]: The operation has completed successfully.
Aug 13 01:14:47.481074 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:14:47.481198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:14:47.504620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:14:47.516608 sh[654]: Success
Aug 13 01:14:47.534991 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:14:47.535035 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:14:47.535598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 01:14:47.546595 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 01:14:47.587391 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:14:47.591639 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:14:47.598353 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:14:47.610612 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 01:14:47.613596 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (666)
Aug 13 01:14:47.617870 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 01:14:47.617897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:14:47.617908 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 01:14:47.626900 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:14:47.627842 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:14:47.628702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:14:47.629366 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:14:47.631963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:14:47.663593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (701)
Aug 13 01:14:47.667161 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:14:47.667185 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:14:47.669347 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:14:47.678791 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:14:47.679402 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:14:47.681525 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:14:47.765291 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:14:47.769211 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:14:47.787010 ignition[762]: Ignition 2.21.0
Aug 13 01:14:47.787021 ignition[762]: Stage: fetch-offline
Aug 13 01:14:47.787048 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:14:47.787057 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:14:47.787132 ignition[762]: parsed url from cmdline: ""
Aug 13 01:14:47.787136 ignition[762]: no config URL provided
Aug 13 01:14:47.787141 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:14:47.787150 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:14:47.793235 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:14:47.787154 ignition[762]: failed to fetch config: resource requires networking
Aug 13 01:14:47.787943 ignition[762]: Ignition finished successfully
Aug 13 01:14:47.806266 systemd-networkd[840]: lo: Link UP
Aug 13 01:14:47.806277 systemd-networkd[840]: lo: Gained carrier
Aug 13 01:14:47.807724 systemd-networkd[840]: Enumeration completed
Aug 13 01:14:47.808384 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:14:47.808686 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:14:47.808691 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:14:47.809524 systemd-networkd[840]: eth0: Link UP
Aug 13 01:14:47.810039 systemd[1]: Reached target network.target - Network.
Aug 13 01:14:47.810529 systemd-networkd[840]: eth0: Gained carrier
Aug 13 01:14:47.810538 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:14:47.813327 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:14:47.833660 ignition[844]: Ignition 2.21.0
Aug 13 01:14:47.834232 ignition[844]: Stage: fetch
Aug 13 01:14:47.834407 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:14:47.834419 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:14:47.834499 ignition[844]: parsed url from cmdline: ""
Aug 13 01:14:47.834503 ignition[844]: no config URL provided
Aug 13 01:14:47.834508 ignition[844]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:14:47.834516 ignition[844]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:14:47.834548 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:14:47.834803 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:14:48.035677 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:14:48.035820 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:14:48.358637 systemd-networkd[840]: eth0: DHCPv4 address 172.234.199.78/24, gateway 172.234.199.1 acquired from 23.40.197.114
Aug 13 01:14:48.436002 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:14:48.567501 ignition[844]: PUT result: OK
Aug 13 01:14:48.567588 ignition[844]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:14:48.705285 ignition[844]: GET result: OK
Aug 13 01:14:48.705363 ignition[844]: parsing config with SHA512: 9205eda92903b16a5e2b06162692c280ff7198acbde14d7a85e24854713e02418f6fbdf6b3bdbb30455eda2504aa402e96b25a71f0258223c9f637366ee9e816
Aug 13 01:14:48.709101 unknown[844]: fetched base config from "system"
Aug 13 01:14:48.709221 unknown[844]: fetched base config from "system"
Aug 13 01:14:48.709393 ignition[844]: fetch: fetch complete
Aug 13 01:14:48.709227 unknown[844]: fetched user config from "akamai"
Aug 13 01:14:48.709397 ignition[844]: fetch: fetch passed
Aug 13 01:14:48.713204 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:14:48.709436 ignition[844]: Ignition finished successfully
Aug 13 01:14:48.716671 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:14:48.759101 ignition[851]: Ignition 2.21.0
Aug 13 01:14:48.759115 ignition[851]: Stage: kargs
Aug 13 01:14:48.759222 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:14:48.759232 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:14:48.761084 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:14:48.759665 ignition[851]: kargs: kargs passed
Aug 13 01:14:48.759703 ignition[851]: Ignition finished successfully
Aug 13 01:14:48.764682 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:14:48.788736 ignition[857]: Ignition 2.21.0
Aug 13 01:14:48.788750 ignition[857]: Stage: disks
Aug 13 01:14:48.789082 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:14:48.789096 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:14:48.792119 ignition[857]: disks: disks passed
Aug 13 01:14:48.792194 ignition[857]: Ignition finished successfully
Aug 13 01:14:48.794490 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:14:48.795780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:14:48.796345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:14:48.797541 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:14:48.798750 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:14:48.800094 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:14:48.802235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:14:48.828021 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 01:14:48.830151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:14:48.832511 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:14:48.932598 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 01:14:48.933097 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:14:48.934100 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:14:48.935742 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:14:48.938645 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:14:48.940019 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:14:48.940063 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:14:48.940085 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:14:48.950442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:14:48.952223 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:14:48.959587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874)
Aug 13 01:14:48.962621 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:14:48.966061 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:14:48.966084 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:14:48.969925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:14:48.999468 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:14:49.004294 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:14:49.009002 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:14:49.012420 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:14:49.094279 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:14:49.096682 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:14:49.098388 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:14:49.110546 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:14:49.113601 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:14:49.129293 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:14:49.134554 ignition[987]: INFO : Ignition 2.21.0 Aug 13 01:14:49.136485 ignition[987]: INFO : Stage: mount Aug 13 01:14:49.136485 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:14:49.136485 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:14:49.136485 ignition[987]: INFO : mount: mount passed Aug 13 01:14:49.136485 ignition[987]: INFO : Ignition finished successfully Aug 13 01:14:49.138844 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:14:49.140925 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:14:49.732749 systemd-networkd[840]: eth0: Gained IPv6LL Aug 13 01:14:49.934704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:14:49.955588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (999) Aug 13 01:14:49.955616 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:14:49.957885 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:14:49.959617 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:14:49.964715 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:14:49.988986 ignition[1016]: INFO : Ignition 2.21.0 Aug 13 01:14:49.988986 ignition[1016]: INFO : Stage: files Aug 13 01:14:49.990397 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:14:49.990397 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:14:49.990397 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:14:49.990397 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:14:49.990397 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:14:49.994165 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:14:49.994165 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:14:49.994165 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:14:49.994165 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:14:49.992475 unknown[1016]: wrote ssh authorized keys file for user: core Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:14:49.999986 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 01:14:50.285464 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Aug 13 01:14:50.587388 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:14:50.587388 ignition[1016]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Aug 13 01:14:50.589700 ignition[1016]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:14:50.589700 ignition[1016]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:14:50.589700 ignition[1016]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Aug 13 01:14:50.592878 ignition[1016]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:14:50.592878 ignition[1016]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:14:50.592878 ignition[1016]: INFO : files: files passed Aug 13 01:14:50.592878 ignition[1016]: INFO : Ignition finished successfully Aug 13 01:14:50.592764 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:14:50.596683 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:14:50.622599 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:14:50.629137 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:14:50.629861 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:14:50.649918 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:14:50.649918 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:14:50.651986 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:14:50.653798 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:14:50.656305 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:14:50.658107 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:14:50.717187 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:14:50.717965 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:14:50.718832 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:14:50.720058 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:14:50.721351 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:14:50.722085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Aug 13 01:14:50.756326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:14:50.759996 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:14:50.778310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:14:50.779488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:14:50.780844 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:14:50.782055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:14:50.782205 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:14:50.783521 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:14:50.784325 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:14:50.785540 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:14:50.786707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:14:50.787847 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:14:50.789118 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:14:50.790403 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:14:50.791687 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:14:50.793175 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:14:50.794557 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:14:50.796040 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:14:50.797409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:14:50.797518 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:14:50.798923 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:14:50.799775 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:14:50.800839 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:14:50.800959 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:14:50.802148 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:14:50.802284 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:14:50.803833 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:14:50.803952 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:14:50.804749 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:14:50.804882 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:14:50.807660 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:14:50.810831 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:14:50.811771 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:14:50.813669 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:14:50.815748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:14:50.815889 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:14:50.821769 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
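The long run of "Stopped target …" / "Stopped …" messages above is systemd unwinding the initrd in reverse dependency order before the root switch: a unit is stopped only after everything that depends on it has gone away. A toy sketch of that ordering idea follows; the unit names and dependencies are illustrative and this is not systemd's actual job engine.

```python
# Conceptual illustration of reverse-dependency teardown: compute a start order
# with a topological sort, then stop in the reverse of that order.
from graphlib import TopologicalSorter

# unit -> units it requires (its predecessors in start order)
requires = {
    "initrd.target": {"basic.target", "ignition-complete.target"},
    "ignition-complete.target": {"ignition-files.service"},
    "ignition-files.service": {"sysroot.mount"},
    "basic.target": {"sysinit.target"},
    "sysinit.target": {"local-fs.target"},
    "local-fs.target": set(),
    "sysroot.mount": set(),
}

start_order = list(TopologicalSorter(requires).static_order())  # dependencies first
stop_order = list(reversed(start_order))                        # dependents first
print(stop_order)
```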
Aug 13 01:14:50.821889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:14:50.843057 ignition[1069]: INFO : Ignition 2.21.0 Aug 13 01:14:50.843057 ignition[1069]: INFO : Stage: umount Aug 13 01:14:50.843057 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:14:50.843057 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:14:50.842638 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:14:50.848543 ignition[1069]: INFO : umount: umount passed Aug 13 01:14:50.848543 ignition[1069]: INFO : Ignition finished successfully Aug 13 01:14:50.849664 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:14:50.849978 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:14:50.852676 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:14:50.852775 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:14:50.853437 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:14:50.853487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:14:50.854098 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:14:50.854142 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:14:50.854790 systemd[1]: Stopped target network.target - Network. Aug 13 01:14:50.860805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:14:50.860859 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:14:50.861405 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:14:50.862072 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:14:50.863627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:14:50.864191 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:14:50.865907 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:14:50.867058 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:14:50.867103 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:14:50.868401 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:14:50.868439 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:14:50.869616 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:14:50.869666 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:14:50.870716 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:14:50.870759 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:14:50.872236 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:14:50.874061 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:14:50.876233 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:14:50.876335 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:14:50.877334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:14:50.877429 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:14:50.878222 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:14:50.878342 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Aug 13 01:14:50.904367 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:14:50.904648 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:14:50.904775 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:14:50.907342 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:14:50.908540 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:14:50.909223 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:14:50.909262 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:14:50.911136 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:14:50.913064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:14:50.913117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:14:50.915133 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:14:50.915181 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:14:50.916780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:14:50.917045 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:14:50.918045 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:14:50.918092 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:14:50.919688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:14:50.933315 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:14:50.933380 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:14:50.935558 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:14:50.935698 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:14:50.938933 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:14:50.939111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:14:50.940482 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:14:50.940552 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:14:50.941397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:14:50.941433 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:14:50.942960 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:14:50.943008 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:14:50.944934 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:14:50.944979 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:14:50.946120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:14:50.946169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:14:50.948662 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:14:50.949735 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:14:50.949787 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Aug 13 01:14:50.952690 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:14:50.952920 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:14:50.954668 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 01:14:50.954713 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:14:50.956075 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:14:50.956118 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:14:50.956905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:14:50.956948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:14:50.959909 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:14:50.959964 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 13 01:14:50.960007 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:14:50.960051 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:14:50.966399 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:14:50.966502 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:14:50.967822 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:14:50.970599 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:14:50.995353 systemd[1]: Switching root. Aug 13 01:14:51.032423 systemd-journald[205]: Journal stopped Aug 13 01:14:52.044169 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Aug 13 01:14:52.044197 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:14:52.044209 kernel: SELinux: policy capability open_perms=1 Aug 13 01:14:52.044221 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:14:52.044229 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:14:52.044238 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:14:52.044247 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:14:52.044255 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:14:52.044265 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:14:52.044274 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:14:52.044285 kernel: audit: type=1403 audit(1755047691.148:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:14:52.044294 systemd[1]: Successfully loaded SELinux policy in 53.921ms. Aug 13 01:14:52.044305 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.491ms. Aug 13 01:14:52.044315 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:14:52.044326 systemd[1]: Detected virtualization kvm. Aug 13 01:14:52.044338 systemd[1]: Detected architecture x86-64. Aug 13 01:14:52.044347 systemd[1]: Detected first boot. 
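The systemd banner above encodes the compile-time feature set as a list of +/- prefixed flags. Splitting that string into enabled and disabled sets is trivial; a short sketch using the exact list from the log:

```python
# Parse systemd's compile-time feature string into enabled/disabled sets.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
            "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = {f[1:] for f in FEATURES.split() if f.startswith("+")}
disabled = {f[1:] for f in FEATURES.split() if f.startswith("-")}
assert "SELINUX" in enabled and "APPARMOR" in disabled
```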
Aug 13 01:14:52.044357 systemd[1]: Initializing machine ID from random generator. Aug 13 01:14:52.044367 zram_generator::config[1113]: No configuration found. Aug 13 01:14:52.044377 kernel: Guest personality initialized and is inactive Aug 13 01:14:52.044385 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:14:52.044394 kernel: Initialized host personality Aug 13 01:14:52.044405 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:14:52.044415 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:14:52.044425 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:14:52.044435 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:14:52.044445 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:14:52.044454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:14:52.044466 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:14:52.044477 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:14:52.044487 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:14:52.044497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:14:52.044507 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:14:52.044516 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:14:52.044526 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:14:52.044536 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:14:52.044547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:14:52.044557 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:14:52.044609 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:14:52.044622 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:14:52.044636 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:14:52.044646 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:14:52.044656 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:14:52.044666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:14:52.044677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:14:52.044687 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:14:52.044697 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:14:52.044707 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:14:52.044716 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:14:52.044727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:14:52.044914 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:14:52.044923 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:14:52.044935 systemd[1]: Reached target swap.target - Swaps. 
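"Initializing machine ID from random generator" above refers to /etc/machine-id, a 128-bit random identifier rendered as 32 lowercase hex characters; the same value later names the journal directory (/run/log/journal/950f2f83…). A sketch of producing an ID in that format follows; the exact entropy path systemd uses is not shown in the log.

```python
# A machine-id is 128 random bits formatted as 32 lowercase hex characters,
# i.e. a random UUID with the dashes removed.
import uuid

machine_id = uuid.uuid4().hex   # e.g. '950f2f838e8d4761bf1f5b35e1cd81ca'
assert len(machine_id) == 32
```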
Aug 13 01:14:52.044945 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:14:52.044954 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:14:52.044964 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:14:52.044974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:14:52.044986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:14:52.044996 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:14:52.045006 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:14:52.045016 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:14:52.045028 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:14:52.045038 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:14:52.045047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:52.045057 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:14:52.045069 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:14:52.045079 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:14:52.045089 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:14:52.045112 systemd[1]: Reached target machines.target - Containers. Aug 13 01:14:52.045123 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:14:52.045133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:14:52.045143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:14:52.045153 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:14:52.045165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:14:52.045175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:14:52.045185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:14:52.045194 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:14:52.045204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:14:52.045214 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:14:52.045224 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:14:52.045234 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:14:52.045244 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:14:52.045257 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:14:52.045268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Aug 13 01:14:52.045278 kernel: loop: module loaded Aug 13 01:14:52.045287 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:14:52.045297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:14:52.045307 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:14:52.045317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:14:52.045327 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:14:52.045339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:14:52.045349 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:14:52.045358 systemd[1]: Stopped verity-setup.service. Aug 13 01:14:52.045369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:52.045378 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:14:52.045409 systemd-journald[1190]: Collecting audit messages is disabled. Aug 13 01:14:52.045435 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:14:52.045445 kernel: ACPI: bus type drm_connector registered Aug 13 01:14:52.045455 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:14:52.045465 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:14:52.045475 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:14:52.045485 systemd-journald[1190]: Journal started Aug 13 01:14:52.045505 systemd-journald[1190]: Runtime Journal (/run/log/journal/950f2f838e8d4761bf1f5b35e1cd81ca) is 8M, max 78.5M, 70.5M free. Aug 13 01:14:51.704772 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:14:51.717202 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:14:51.717755 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:14:52.048610 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:14:52.050779 kernel: fuse: init (API version 7.41) Aug 13 01:14:52.052224 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:14:52.057773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:14:52.058655 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:14:52.058877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:14:52.059889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:14:52.060660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:14:52.062051 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:14:52.062237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:14:52.064162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:14:52.064806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:14:52.066329 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:14:52.067281 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:14:52.068893 systemd[1]: modprobe@loop.service: Deactivated successfully. 
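The journald line above sizes the runtime journal relative to the tmpfs backing /run: RuntimeMaxUse defaults to 10% of the backing file system, capped at 4G. Assuming /run here is roughly 785 MiB (systemd's usual 20%-of-RAM tmpfs on an instance of about 4 GiB, a figure the log does not state directly), the arithmetic lines up with the reported "max 78.5M":

```python
# Hedged arithmetic: 10% of an assumed ~785 MiB /run, capped at 4 GiB.
run_size_mib = 785                              # assumption, not logged
max_use_mib = min(run_size_mib * 0.10, 4 * 1024)
print(f"max {max_use_mib:.1f}M")                # -> max 78.5M
```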
Aug 13 01:14:52.069098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:14:52.071006 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:14:52.073241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:14:52.074249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:14:52.075272 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:14:52.076249 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:14:52.092785 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:14:52.095680 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:14:52.098677 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:14:52.099292 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:14:52.099366 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:14:52.101430 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:14:52.110667 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:14:52.112477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:14:52.115168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:14:52.122130 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:14:52.122892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:14:52.127092 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:14:52.128972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:14:52.131696 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:14:52.135007 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:14:52.138754 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:14:52.142367 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:14:52.144274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:14:52.148654 systemd-journald[1190]: Time spent on flushing to /var/log/journal/950f2f838e8d4761bf1f5b35e1cd81ca is 39.368ms for 984 entries. Aug 13 01:14:52.148654 systemd-journald[1190]: System Journal (/var/log/journal/950f2f838e8d4761bf1f5b35e1cd81ca) is 8M, max 195.6M, 187.6M free. Aug 13 01:14:52.207696 systemd-journald[1190]: Received client request to flush runtime journal. Aug 13 01:14:52.207730 kernel: loop0: detected capacity change from 0 to 113872 Aug 13 01:14:52.180056 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:14:52.180901 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Aug 13 01:14:52.184663 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:14:52.210509 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:14:52.219643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:14:52.231773 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:14:52.244390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:14:52.246616 kernel: loop1: detected capacity change from 0 to 8 Aug 13 01:14:52.253304 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Aug 13 01:14:52.253318 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Aug 13 01:14:52.265644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:14:52.269158 kernel: loop2: detected capacity change from 0 to 224512 Aug 13 01:14:52.269372 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:14:52.276155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:14:52.312596 kernel: loop3: detected capacity change from 0 to 146240 Aug 13 01:14:52.333615 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:14:52.337691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:14:52.348004 kernel: loop4: detected capacity change from 0 to 113872 Aug 13 01:14:52.371697 kernel: loop5: detected capacity change from 0 to 8 Aug 13 01:14:52.372500 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Aug 13 01:14:52.372801 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Aug 13 01:14:52.379685 kernel: loop6: detected capacity change from 0 to 224512 Aug 13 01:14:52.379161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:14:52.401587 kernel: loop7: detected capacity change from 0 to 146240 Aug 13 01:14:52.419927 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:14:52.420747 (sd-merge)[1262]: Merged extensions into '/usr'. Aug 13 01:14:52.427926 systemd[1]: Reload requested from client PID 1238 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:14:52.428010 systemd[1]: Reloading... Aug 13 01:14:52.531596 zram_generator::config[1290]: No configuration found. Aug 13 01:14:52.664914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:14:52.724250 ldconfig[1233]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:14:52.743164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:14:52.743820 systemd[1]: Reloading finished in 315 ms. Aug 13 01:14:52.758717 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:14:52.761245 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:14:52.772690 systemd[1]: Starting ensure-sysext.service... Aug 13 01:14:52.776719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:14:52.809548 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... 
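The (sd-merge) lines above record systemd-sysext stacking the four extension images over the host /usr; conceptually the result is a read-only overlayfs whose lower layers are each extension's usr/ tree plus the original /usr. A conceptual sketch of composing such a mount follows; the staging paths and option string are illustrative, since systemd-sysext assembles the real mount itself.

```python
# Conceptual sketch of "Merged extensions into '/usr'": a read-only overlay of
# each extension's usr/ tree on top of the host /usr (no upperdir).
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-akamai"]

# overlayfs gives the leftmost lowerdir the highest precedence, so the base /usr
# goes last and every extension can shadow it.
layers = [f"/run/extensions/{name}/usr" for name in extensions] + ["/usr"]
mount_opts = "lowerdir=" + ":".join(layers)
print("mount -t overlay overlay -o", mount_opts, "/usr")
```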
Aug 13 01:14:52.809575 systemd[1]: Reloading... Aug 13 01:14:52.820488 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:14:52.820828 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:14:52.821158 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:14:52.821448 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:14:52.822327 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:14:52.825126 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Aug 13 01:14:52.825717 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Aug 13 01:14:52.834409 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:14:52.834425 systemd-tmpfiles[1334]: Skipping /boot Aug 13 01:14:52.849492 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:14:52.849508 systemd-tmpfiles[1334]: Skipping /boot Aug 13 01:14:52.892588 zram_generator::config[1361]: No configuration found. Aug 13 01:14:52.983675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:14:53.053954 systemd[1]: Reloading finished in 244 ms. Aug 13 01:14:53.069280 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:14:53.081528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:14:53.089337 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:14:53.091765 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:14:53.099743 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:14:53.104231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:14:53.106557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:14:53.113739 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:14:53.117553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:53.117902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:14:53.120703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:14:53.124772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:14:53.129542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:14:53.130735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:14:53.130835 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
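The systemd-tmpfiles warnings above are its duplicate-path check firing: when two tmpfiles.d lines claim the same path, the later ones are ignored. A simplified version of that check (the real logic also compares entry types and works across multiple configuration files):

```python
# Report tmpfiles.d-style lines whose path (second field) was already claimed.
def find_duplicates(lines: list[str]) -> list[str]:
    seen, dupes = set(), []
    for line in lines:
        if line.lstrip().startswith("#"):
            continue
        fields = line.split()
        if len(fields) < 2:
            continue
        path = fields[1]
        if path in seen:
            dupes.append(path)
        seen.add(path)
    return dupes

print(find_duplicates(["d /var/lib/nfs/sm 0700 - - -",
                       "d /var/lib/nfs/sm 0755 - - -"]))  # -> ['/var/lib/nfs/sm']
```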
Aug 13 01:14:53.136756 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:14:53.137870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:53.148134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:53.148327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:14:53.148510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:14:53.148664 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:14:53.148777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:53.155837 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:53.156059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:14:53.161629 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:14:53.162413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:14:53.162756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:14:53.162964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:14:53.170772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:14:53.181187 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Aug 13 01:14:53.186396 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:14:53.188704 systemd[1]: Finished ensure-sysext.service. Aug 13 01:14:53.191855 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:14:53.192881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:14:53.193786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:14:53.194711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:14:53.194894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:14:53.197088 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:14:53.197777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:14:53.208821 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:14:53.211094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:14:53.214270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 01:14:53.215597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:14:53.220703 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:14:53.223172 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:14:53.231506 augenrules[1446]: No rules Aug 13 01:14:53.233183 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:14:53.233417 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:14:53.247113 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:14:53.248432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:14:53.250669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:14:53.256389 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:14:53.277586 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:14:53.371970 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:14:53.438599 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:14:53.454636 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 01:14:53.479601 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:14:53.496582 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:14:53.499600 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:14:53.582978 systemd-networkd[1460]: lo: Link UP Aug 13 01:14:53.583271 systemd-networkd[1460]: lo: Gained carrier Aug 13 01:14:53.590883 systemd-networkd[1460]: Enumeration completed Aug 13 01:14:53.590979 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:14:53.591273 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:14:53.591277 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:14:53.591890 systemd-networkd[1460]: eth0: Link UP Aug 13 01:14:53.592081 systemd-networkd[1460]: eth0: Gained carrier Aug 13 01:14:53.592098 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:14:53.598730 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:14:53.602353 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:14:53.617021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:14:53.621777 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:14:53.632203 systemd-resolved[1409]: Positive Trust Anchors: Aug 13 01:14:53.632229 systemd-resolved[1409]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:14:53.632271 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:14:53.636331 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:14:53.638241 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:14:53.643781 systemd-resolved[1409]: Defaulting to hostname 'linux'. Aug 13 01:14:53.647456 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:14:53.648221 systemd[1]: Reached target network.target - Network. Aug 13 01:14:53.648705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:14:53.649414 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:14:53.651988 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:14:53.652612 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:14:53.655454 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:14:53.655590 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:14:53.656310 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:14:53.656990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:14:53.657638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:14:53.658846 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:14:53.658879 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:14:53.660364 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:14:53.662707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:14:53.666932 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:14:53.673096 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:14:53.675878 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:14:53.677480 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:14:53.685376 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:14:53.687049 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:14:53.689776 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:14:53.694209 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:14:53.695788 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:14:53.712993 systemd[1]: Reached target sockets.target - Socket Units. 
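The positive trust anchor quoted above is the root-zone DS record that systemd-resolved ships for DNSSEC validation. Its RFC 4034 fields split out as follows:

```python
# Split the built-in root trust anchor into its DS record fields.
anchor = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = anchor.split()
record = {
    "owner": owner,                   # "." - the DNS root zone
    "key_tag": int(key_tag),          # 20326 identifies the 2017 root KSK
    "algorithm": int(algorithm),      # 8 = RSA/SHA-256
    "digest_type": int(digest_type),  # 2 = SHA-256 digest of the DNSKEY
    "digest": digest,
}
print(record)
```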
Aug 13 01:14:53.714697 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:14:53.715272 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:14:53.715371 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:14:53.717806 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:14:53.722963 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:14:53.727810 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:14:53.732748 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:14:53.736816 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:14:53.748615 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:14:53.749634 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:14:53.751942 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:14:53.758769 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:14:53.763976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:14:53.767498 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:14:53.784415 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:14:53.788285 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:14:53.789803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:14:53.790812 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:14:53.793776 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:14:53.798664 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing passwd entry cache Aug 13 01:14:53.800402 jq[1522]: false Aug 13 01:14:53.802610 oslogin_cache_refresh[1524]: Refreshing passwd entry cache Aug 13 01:14:53.806323 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting users, quitting Aug 13 01:14:53.807552 oslogin_cache_refresh[1524]: Failure getting users, quitting Aug 13 01:14:53.808598 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:14:53.808598 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing group entry cache Aug 13 01:14:53.808598 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting groups, quitting Aug 13 01:14:53.808598 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:14:53.807639 oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 01:14:53.807677 oslogin_cache_refresh[1524]: Refreshing group entry cache Aug 13 01:14:53.808337 oslogin_cache_refresh[1524]: Failure getting groups, quitting Aug 13 01:14:53.808346 oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:14:53.812796 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:14:53.814345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:14:53.814603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:14:53.815180 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:14:53.815453 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:14:53.819666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:14:53.819938 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:14:53.830059 extend-filesystems[1523]: Found /dev/sda6 Aug 13 01:14:53.842374 extend-filesystems[1523]: Found /dev/sda9 Aug 13 01:14:53.851205 extend-filesystems[1523]: Checking size of /dev/sda9 Aug 13 01:14:53.856884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:14:53.861728 update_engine[1532]: I20250813 01:14:53.858750 1532 main.cc:92] Flatcar Update Engine starting Aug 13 01:14:53.864702 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:14:53.865130 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:14:53.874074 jq[1534]: true Aug 13 01:14:53.889733 coreos-metadata[1519]: Aug 13 01:14:53.887 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:14:53.901687 extend-filesystems[1523]: Resized partition /dev/sda9 Aug 13 01:14:53.896936 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:14:53.904436 extend-filesystems[1566]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:14:53.912872 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:14:53.912916 jq[1562]: true Aug 13 01:14:53.930803 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:14:53.940715 extend-filesystems[1566]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:14:53.940715 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:14:53.940715 extend-filesystems[1566]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:14:53.946490 extend-filesystems[1523]: Resized filesystem in /dev/sda9 Aug 13 01:14:53.941592 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:14:53.947694 dbus-daemon[1520]: [system] SELinux support is enabled Aug 13 01:14:53.941832 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:14:53.949035 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 01:14:53.953425 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:14:53.953479 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
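The extend-filesystems/resize2fs lines above grow the root file system online from 553472 to 555003 4 KiB blocks. In byte terms:

```python
# Worked arithmetic for the online resize reported above.
old_blocks, new_blocks, block_size = 553472, 555003, 4096
grown = (new_blocks - old_blocks) * block_size
total = new_blocks * block_size
print(f"grew by {grown / 2**20:.1f} MiB to {total / 2**30:.2f} GiB")
# -> grew by 6.0 MiB to 2.12 GiB
```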
Aug 13 01:14:53.955471 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:14:53.955489 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:14:53.965967 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:14:53.968766 update_engine[1532]: I20250813 01:14:53.966612 1532 update_check_scheduler.cc:74] Next update check in 11m25s Aug 13 01:14:53.977291 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:14:53.984583 bash[1587]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:14:53.984906 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:14:53.989677 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:14:53.989705 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:14:53.991279 systemd-logind[1528]: New seat seat0. Aug 13 01:14:53.992720 systemd[1]: Starting sshkeys.service... Aug 13 01:14:53.997081 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:14:54.068653 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:14:54.072704 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:14:54.139818 systemd-networkd[1460]: eth0: DHCPv4 address 172.234.199.78/24, gateway 172.234.199.1 acquired from 23.40.197.114 Aug 13 01:14:54.140431 dbus-daemon[1520]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1460 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:14:54.140759 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Aug 13 01:14:54.148457 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Aug 13 01:14:54.188644 coreos-metadata[1597]: Aug 13 01:14:54.185 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:14:54.233199 containerd[1549]: time="2025-08-13T01:14:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:14:54.237324 containerd[1549]: time="2025-08-13T01:14:54.237045173Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:14:54.247782 containerd[1549]: time="2025-08-13T01:14:54.247747952Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.92µs" Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.247866772Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.247886382Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248050402Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248064412Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248084982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248141141Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248152211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248397291Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248409121Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248418471Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248425591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:14:54.248809 containerd[1549]: time="2025-08-13T01:14:54.248510181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:14:54.249019 containerd[1549]: time="2025-08-13T01:14:54.248746131Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:14:54.249019 containerd[1549]: time="2025-08-13T01:14:54.248774431Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:14:54.249019 containerd[1549]: time="2025-08-13T01:14:54.248783321Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 01:14:54.249247 containerd[1549]: time="2025-08-13T01:14:54.249229950Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:14:54.249614 containerd[1549]: time="2025-08-13T01:14:54.249597330Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:14:54.249778 containerd[1549]: time="2025-08-13T01:14:54.249762860Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:14:54.263597 containerd[1549]: time="2025-08-13T01:14:54.263536726Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:14:54.263634 containerd[1549]: time="2025-08-13T01:14:54.263621526Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:14:54.263669 containerd[1549]: time="2025-08-13T01:14:54.263636956Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:14:54.263669 containerd[1549]: time="2025-08-13T01:14:54.263649016Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:14:54.263669 containerd[1549]: time="2025-08-13T01:14:54.263663756Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:14:54.263718 containerd[1549]: time="2025-08-13T01:14:54.263673516Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:14:54.263718 containerd[1549]: time="2025-08-13T01:14:54.263688316Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 01:14:54.263718 containerd[1549]: time="2025-08-13T01:14:54.263700576Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:14:54.263718 containerd[1549]: time="2025-08-13T01:14:54.263710446Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:14:54.263718 containerd[1549]: time="2025-08-13T01:14:54.263719166Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:14:54.263797 containerd[1549]: time="2025-08-13T01:14:54.263728316Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:14:54.263797 containerd[1549]: time="2025-08-13T01:14:54.263760646Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:14:54.263916 containerd[1549]: time="2025-08-13T01:14:54.263892516Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:14:54.263936 containerd[1549]: time="2025-08-13T01:14:54.263920926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:14:54.263953 containerd[1549]: time="2025-08-13T01:14:54.263934996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 
01:14:54.263953 containerd[1549]: time="2025-08-13T01:14:54.263945626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 01:14:54.263991 containerd[1549]: time="2025-08-13T01:14:54.263955806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:14:54.263991 containerd[1549]: time="2025-08-13T01:14:54.263966606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:14:54.263991 containerd[1549]: time="2025-08-13T01:14:54.263976336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:14:54.263991 containerd[1549]: time="2025-08-13T01:14:54.263985136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:14:54.264219 containerd[1549]: time="2025-08-13T01:14:54.263994436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:14:54.264219 containerd[1549]: time="2025-08-13T01:14:54.264004386Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:14:54.264219 containerd[1549]: time="2025-08-13T01:14:54.264017786Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:14:54.264270 containerd[1549]: time="2025-08-13T01:14:54.264252045Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:14:54.264270 containerd[1549]: time="2025-08-13T01:14:54.264265485Z" level=info msg="Start snapshots syncer" Aug 13 01:14:54.264303 containerd[1549]: time="2025-08-13T01:14:54.264290165Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:14:54.264584 containerd[1549]: time="2025-08-13T01:14:54.264535585Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:14:54.274577 containerd[1549]: time="2025-08-13T01:14:54.274419695Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:14:54.274730 containerd[1549]: time="2025-08-13T01:14:54.274702665Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:14:54.277785 containerd[1549]: time="2025-08-13T01:14:54.277739032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:14:54.277812 containerd[1549]: time="2025-08-13T01:14:54.277791562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:14:54.277812 containerd[1549]: time="2025-08-13T01:14:54.277804762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:14:54.277846 containerd[1549]: time="2025-08-13T01:14:54.277816022Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:14:54.277864 containerd[1549]: time="2025-08-13T01:14:54.277828042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:14:54.277864 containerd[1549]: time="2025-08-13T01:14:54.277858722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:14:54.277907 containerd[1549]: time="2025-08-13T01:14:54.277868802Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:14:54.277907 containerd[1549]: time="2025-08-13T01:14:54.277891592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:14:54.277907 containerd[1549]: 
time="2025-08-13T01:14:54.277901982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:14:54.277954 containerd[1549]: time="2025-08-13T01:14:54.277930422Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.277972842Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278044242Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278054732Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278063992Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278089752Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278100172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278115882Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278127912Z" level=info msg="runtime interface created" Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278133531Z" level=info msg="created NRI interface" Aug 13 01:14:54.279589 containerd[1549]: time="2025-08-13T01:14:54.278141611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:14:54.279758 containerd[1549]: time="2025-08-13T01:14:54.279616860Z" level=info msg="Connect containerd service" Aug 13 01:14:54.279758 containerd[1549]: time="2025-08-13T01:14:54.279652460Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:14:54.285745 containerd[1549]: time="2025-08-13T01:14:54.285715024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:14:54.304831 coreos-metadata[1597]: Aug 13 01:14:54.304 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:14:54.305473 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:14:54.352727 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:14:54.396041 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Aug 13 01:14:54.396397 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:14:54.397881 dbus-daemon[1520]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1602 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:14:54.399230 systemd-timesyncd[1443]: Contacted time server 102.129.185.135:123 (0.flatcar.pool.ntp.org). Aug 13 01:14:54.400227 systemd-timesyncd[1443]: Initial clock synchronization to Wed 2025-08-13 01:14:54.707975 UTC. Aug 13 01:14:54.405014 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:14:54.424795 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:14:54.441757 containerd[1549]: time="2025-08-13T01:14:54.441714398Z" level=info msg="Start subscribing containerd event" Aug 13 01:14:54.441814 containerd[1549]: time="2025-08-13T01:14:54.441767048Z" level=info msg="Start recovering state" Aug 13 01:14:54.441876 containerd[1549]: time="2025-08-13T01:14:54.441850778Z" level=info msg="Start event monitor" Aug 13 01:14:54.441876 containerd[1549]: time="2025-08-13T01:14:54.441871938Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:14:54.441920 containerd[1549]: time="2025-08-13T01:14:54.441879868Z" level=info msg="Start streaming server" Aug 13 01:14:54.441920 containerd[1549]: time="2025-08-13T01:14:54.441888698Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:14:54.441920 containerd[1549]: time="2025-08-13T01:14:54.441895518Z" level=info msg="runtime interface starting up..." Aug 13 01:14:54.441920 containerd[1549]: time="2025-08-13T01:14:54.441901058Z" level=info msg="starting plugins..." Aug 13 01:14:54.441920 containerd[1549]: time="2025-08-13T01:14:54.441913698Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:14:54.442286 containerd[1549]: time="2025-08-13T01:14:54.442265227Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:14:54.442348 containerd[1549]: time="2025-08-13T01:14:54.442321427Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:14:54.442397 containerd[1549]: time="2025-08-13T01:14:54.442379837Z" level=info msg="containerd successfully booted in 0.209595s" Aug 13 01:14:54.457810 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:14:54.464900 coreos-metadata[1597]: Aug 13 01:14:54.464 INFO Fetch successful Aug 13 01:14:54.471629 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:14:54.473219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:14:54.475740 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:14:54.475957 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:14:54.486676 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:14:54.491214 update-ssh-keys[1638]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:14:54.493318 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:14:54.495775 systemd[1]: Finished sshkeys.service. Aug 13 01:14:54.507450 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:14:54.510937 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:14:54.512894 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
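
Once containerd reports serving on /run/containerd/containerd.sock, that socket can be exercised directly. A small sketch using the stock ctr client against the address from the log; this is just one way to confirm the daemon is answering, not something the boot flow itself does:

    #!/usr/bin/env python3
    # Sketch: ask containerd for its version over the unix socket it reports
    # serving on in the log above.
    import subprocess

    SOCKET = "/run/containerd/containerd.sock"   # address from the log

    result = subprocess.run(
        ["ctr", "--address", SOCKET, "version"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
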
Aug 13 01:14:54.514745 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:14:54.555260 polkitd[1635]: Started polkitd version 126 Aug 13 01:14:54.559931 polkitd[1635]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:14:54.560179 polkitd[1635]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:14:54.560214 polkitd[1635]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:14:54.560424 polkitd[1635]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:14:54.560443 polkitd[1635]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:14:54.560479 polkitd[1635]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:14:54.561229 polkitd[1635]: Finished loading, compiling and executing 2 rules Aug 13 01:14:54.561504 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:14:54.561754 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:14:54.562322 polkitd[1635]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:14:54.571533 systemd-resolved[1409]: System hostname changed to '172-234-199-78'. Aug 13 01:14:54.571695 systemd-hostnamed[1602]: Hostname set to <172-234-199-78> (transient) Aug 13 01:14:54.897451 coreos-metadata[1519]: Aug 13 01:14:54.897 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:14:54.999173 coreos-metadata[1519]: Aug 13 01:14:54.999 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:14:55.218423 coreos-metadata[1519]: Aug 13 01:14:55.218 INFO Fetch successful Aug 13 01:14:55.218423 coreos-metadata[1519]: Aug 13 01:14:55.218 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:14:55.549432 coreos-metadata[1519]: Aug 13 01:14:55.549 INFO Fetch successful Aug 13 01:14:55.621812 systemd-networkd[1460]: eth0: Gained IPv6LL Aug 13 01:14:55.628934 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:14:55.631292 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:14:55.645481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:14:55.647056 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:14:55.672844 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:14:55.674661 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:14:55.676447 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:14:56.536675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:14:56.538297 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:14:56.539883 systemd[1]: Startup finished in 2.809s (kernel) + 6.504s (initrd) + 5.443s (userspace) = 14.757s. 
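
The coreos-metadata agent above follows the same pattern each time it talks to the Linode/Akamai metadata service at 169.254.169.254: PUT /v1/token to obtain a short-lived token, then fetch /v1/instance, /v1/network and /v1/ssh-keys with it. A rough sketch of that flow; the header names (Metadata-Token-Expiry-Seconds on the PUT, X-Metadata-Token on reads) and the JSON Accept header are assumptions about the metadata API, not details recorded in this log, and the agent's real implementation may differ:

    #!/usr/bin/env python3
    # Sketch of the token-then-fetch pattern visible in the coreos-metadata
    # log lines. Header names are assumptions noted in the lead-in.
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def put_token(expiry_seconds: int = 3600) -> str:
        req = urllib.request.Request(
            f"{BASE}/token",
            method="PUT",
            headers={"Metadata-Token-Expiry-Seconds": str(expiry_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    def fetch(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{BASE}/{path}",
            headers={"X-Metadata-Token": token, "Accept": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = put_token()
        for path in ("instance", "network", "ssh-keys"):   # endpoints seen in the log
            print(path, "->", fetch(path, token)[:80], "...")
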
Aug 13 01:14:56.544051 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:14:57.038663 kubelet[1694]: E0813 01:14:57.038501 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:14:57.041886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:14:57.042070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:14:57.042442 systemd[1]: kubelet.service: Consumed 845ms CPU time, 265.5M memory peak. Aug 13 01:14:58.868520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:14:58.869912 systemd[1]: Started sshd@0-172.234.199.78:22-147.75.109.163:35282.service - OpenSSH per-connection server daemon (147.75.109.163:35282). Aug 13 01:14:59.233114 sshd[1707]: Accepted publickey for core from 147.75.109.163 port 35282 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:14:59.234857 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:14:59.242256 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:14:59.243555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:14:59.253643 systemd-logind[1528]: New session 1 of user core. Aug 13 01:14:59.266012 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:14:59.270182 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:14:59.284927 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:14:59.287908 systemd-logind[1528]: New session c1 of user core. Aug 13 01:14:59.436245 systemd[1711]: Queued start job for default target default.target. Aug 13 01:14:59.443127 systemd[1711]: Created slice app.slice - User Application Slice. Aug 13 01:14:59.443155 systemd[1711]: Reached target paths.target - Paths. Aug 13 01:14:59.443201 systemd[1711]: Reached target timers.target - Timers. Aug 13 01:14:59.444779 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:14:59.456865 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:14:59.457023 systemd[1711]: Reached target sockets.target - Sockets. Aug 13 01:14:59.457125 systemd[1711]: Reached target basic.target - Basic System. Aug 13 01:14:59.457233 systemd[1711]: Reached target default.target - Main User Target. Aug 13 01:14:59.457261 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:14:59.457382 systemd[1711]: Startup finished in 162ms. Aug 13 01:14:59.467825 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:14:59.729984 systemd[1]: Started sshd@1-172.234.199.78:22-147.75.109.163:35290.service - OpenSSH per-connection server daemon (147.75.109.163:35290). Aug 13 01:15:00.073553 sshd[1722]: Accepted publickey for core from 147.75.109.163 port 35290 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:00.075285 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:00.080139 systemd-logind[1528]: New session 2 of user core. 
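
The first kubelet start above fails simply because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during kubeadm init/join, so the failure is expected this early in boot (a later kubelet start in this log does come up). For orientation only, a sketch of writing a bare-bones KubeletConfiguration at that path; the apiVersion/kind lines are the type metadata of the kubelet.config.k8s.io/v1beta1 API, and everything else a real kubeadm-generated file contains is omitted here:

    #!/usr/bin/env python3
    # Sketch: create the file the failing kubelet.service is looking for.
    # On a real node kubeadm generates this; the content below is a minimal
    # stand-in, not what would actually be written on this host.
    from pathlib import Path

    CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")   # path from the error above

    MINIMAL_CONFIG = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",   # matches the systemd cgroup driver reported later in this log
    ]) + "\n"

    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(MINIMAL_CONFIG)
    print(f"wrote {CONFIG_PATH}")
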
Aug 13 01:15:00.085704 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:15:00.324671 sshd[1724]: Connection closed by 147.75.109.163 port 35290 Aug 13 01:15:00.325532 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:00.329383 systemd[1]: sshd@1-172.234.199.78:22-147.75.109.163:35290.service: Deactivated successfully. Aug 13 01:15:00.331453 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:15:00.333165 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:15:00.334851 systemd-logind[1528]: Removed session 2. Aug 13 01:15:00.386403 systemd[1]: Started sshd@2-172.234.199.78:22-147.75.109.163:35304.service - OpenSSH per-connection server daemon (147.75.109.163:35304). Aug 13 01:15:00.728555 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 35304 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:00.730059 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:00.734428 systemd-logind[1528]: New session 3 of user core. Aug 13 01:15:00.745726 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:15:00.977543 sshd[1732]: Connection closed by 147.75.109.163 port 35304 Aug 13 01:15:00.978138 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:00.985210 systemd[1]: sshd@2-172.234.199.78:22-147.75.109.163:35304.service: Deactivated successfully. Aug 13 01:15:00.985487 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:15:00.986864 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:15:00.988421 systemd-logind[1528]: Removed session 3. Aug 13 01:15:01.035354 systemd[1]: Started sshd@3-172.234.199.78:22-147.75.109.163:35308.service - OpenSSH per-connection server daemon (147.75.109.163:35308). Aug 13 01:15:01.376039 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 35308 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:01.377199 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:01.381625 systemd-logind[1528]: New session 4 of user core. Aug 13 01:15:01.388785 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:15:01.624690 sshd[1740]: Connection closed by 147.75.109.163 port 35308 Aug 13 01:15:01.625297 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:01.628988 systemd[1]: sshd@3-172.234.199.78:22-147.75.109.163:35308.service: Deactivated successfully. Aug 13 01:15:01.630490 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:15:01.631196 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:15:01.632465 systemd-logind[1528]: Removed session 4. Aug 13 01:15:01.687033 systemd[1]: Started sshd@4-172.234.199.78:22-147.75.109.163:35312.service - OpenSSH per-connection server daemon (147.75.109.163:35312). Aug 13 01:15:02.033093 sshd[1746]: Accepted publickey for core from 147.75.109.163 port 35312 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:02.034922 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:02.041282 systemd-logind[1528]: New session 5 of user core. Aug 13 01:15:02.050712 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 13 01:15:02.245119 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:15:02.245429 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:15:02.263442 sudo[1749]: pam_unix(sudo:session): session closed for user root Aug 13 01:15:02.314985 sshd[1748]: Connection closed by 147.75.109.163 port 35312 Aug 13 01:15:02.315879 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:02.319548 systemd[1]: sshd@4-172.234.199.78:22-147.75.109.163:35312.service: Deactivated successfully. Aug 13 01:15:02.321725 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:15:02.323912 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:15:02.325076 systemd-logind[1528]: Removed session 5. Aug 13 01:15:02.383155 systemd[1]: Started sshd@5-172.234.199.78:22-147.75.109.163:35324.service - OpenSSH per-connection server daemon (147.75.109.163:35324). Aug 13 01:15:02.738255 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 35324 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:02.739442 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:02.743632 systemd-logind[1528]: New session 6 of user core. Aug 13 01:15:02.750689 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:15:02.941072 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:15:02.941369 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:15:02.946536 sudo[1759]: pam_unix(sudo:session): session closed for user root Aug 13 01:15:02.952628 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:15:02.952914 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:15:02.962715 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:15:02.998093 augenrules[1781]: No rules Aug 13 01:15:03.000229 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:15:03.000488 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:15:03.001900 sudo[1758]: pam_unix(sudo:session): session closed for user root Aug 13 01:15:03.055937 sshd[1757]: Connection closed by 147.75.109.163 port 35324 Aug 13 01:15:03.056543 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:03.061085 systemd[1]: sshd@5-172.234.199.78:22-147.75.109.163:35324.service: Deactivated successfully. Aug 13 01:15:03.062952 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:15:03.064415 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:15:03.065731 systemd-logind[1528]: Removed session 6. Aug 13 01:15:03.118982 systemd[1]: Started sshd@6-172.234.199.78:22-147.75.109.163:35330.service - OpenSSH per-connection server daemon (147.75.109.163:35330). Aug 13 01:15:03.473114 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 35330 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:03.474661 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:03.479039 systemd-logind[1528]: New session 7 of user core. Aug 13 01:15:03.484698 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 13 01:15:03.677981 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:15:03.678268 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:15:04.222988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:15:04.223213 systemd[1]: kubelet.service: Consumed 845ms CPU time, 265.5M memory peak. Aug 13 01:15:04.226115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:15:04.251939 systemd[1]: Reload requested from client PID 1825 ('systemctl') (unit session-7.scope)... Aug 13 01:15:04.252019 systemd[1]: Reloading... Aug 13 01:15:04.337750 zram_generator::config[1864]: No configuration found. Aug 13 01:15:04.461552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:15:04.568277 systemd[1]: Reloading finished in 315 ms. Aug 13 01:15:04.632046 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:15:04.632142 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:15:04.632646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:15:04.632681 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Aug 13 01:15:04.634316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:15:04.803842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:15:04.811023 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:15:04.850743 kubelet[1922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:15:04.850988 kubelet[1922]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:15:04.851042 kubelet[1922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
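
The deprecation warnings above point at the same config file: in the kubelet.config.k8s.io/v1beta1 API, --container-runtime-endpoint and --volume-plugin-dir correspond to the containerRuntimeEndpoint and volumePluginDir fields, while --pod-infra-container-image has no config equivalent because, as the warning itself notes, the sandbox image is taken from the CRI runtime instead. A small sketch of that mapping; the example values are pulled from elsewhere in this log (the containerd socket and the flexvolume directory), not from this node's actual kubelet flags:

    # Sketch: how the deprecated kubelet flags seen above map onto fields of
    # the KubeletConfiguration file. Values are illustrative.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image: no config field; the sandbox image is
        # configured in the CRI runtime (containerd) itself.
    }

    example = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:35s} -> {field}: {example.get(field)}")
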
Aug 13 01:15:04.851148 kubelet[1922]: I0813 01:15:04.851126 1922 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:15:05.245748 kubelet[1922]: I0813 01:15:05.243697 1922 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:15:05.245748 kubelet[1922]: I0813 01:15:05.244048 1922 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:15:05.245748 kubelet[1922]: I0813 01:15:05.244415 1922 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:15:05.276179 kubelet[1922]: I0813 01:15:05.275989 1922 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:15:05.283360 kubelet[1922]: I0813 01:15:05.283344 1922 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:15:05.286601 kubelet[1922]: I0813 01:15:05.286429 1922 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:15:05.288806 kubelet[1922]: I0813 01:15:05.288773 1922 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:15:05.288963 kubelet[1922]: I0813 01:15:05.288802 1922 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"192.168.178.99","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:15:05.289056 kubelet[1922]: I0813 01:15:05.288967 1922 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:15:05.289056 kubelet[1922]: I0813 01:15:05.288975 1922 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:15:05.289102 kubelet[1922]: I0813 01:15:05.289088 1922 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:15:05.293495 kubelet[1922]: I0813 01:15:05.293200 1922 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:15:05.293495 kubelet[1922]: I0813 01:15:05.293225 1922 kubelet.go:341] "Adding 
static pod path" path="/etc/kubernetes/manifests" Aug 13 01:15:05.293495 kubelet[1922]: I0813 01:15:05.293244 1922 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:15:05.293495 kubelet[1922]: I0813 01:15:05.293254 1922 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:15:05.294125 kubelet[1922]: E0813 01:15:05.293826 1922 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:05.294125 kubelet[1922]: E0813 01:15:05.293870 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:05.295415 kubelet[1922]: I0813 01:15:05.295401 1922 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:15:05.295815 kubelet[1922]: I0813 01:15:05.295800 1922 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:15:05.296395 kubelet[1922]: W0813 01:15:05.296375 1922 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:15:05.298116 kubelet[1922]: I0813 01:15:05.298100 1922 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:15:05.298154 kubelet[1922]: I0813 01:15:05.298129 1922 server.go:1287] "Started kubelet" Aug 13 01:15:05.299102 kubelet[1922]: I0813 01:15:05.298206 1922 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:15:05.299102 kubelet[1922]: I0813 01:15:05.298936 1922 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:15:05.302401 kubelet[1922]: I0813 01:15:05.302240 1922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:15:05.303836 kubelet[1922]: I0813 01:15:05.303810 1922 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:15:05.305763 kubelet[1922]: I0813 01:15:05.305748 1922 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:15:05.305928 kubelet[1922]: I0813 01:15:05.305880 1922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:15:05.306023 kubelet[1922]: E0813 01:15:05.306007 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:05.306089 kubelet[1922]: I0813 01:15:05.306071 1922 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:15:05.307939 kubelet[1922]: I0813 01:15:05.307901 1922 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:15:05.308104 kubelet[1922]: I0813 01:15:05.308075 1922 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:15:05.312023 kubelet[1922]: I0813 01:15:05.312007 1922 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:15:05.312161 kubelet[1922]: I0813 01:15:05.312144 1922 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:15:05.316820 kubelet[1922]: E0813 01:15:05.316801 1922 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"192.168.178.99\" is forbidden: User \"system:anonymous\" cannot get resource 
\"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Aug 13 01:15:05.318869 kubelet[1922]: W0813 01:15:05.318850 1922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "192.168.178.99" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Aug 13 01:15:05.318919 kubelet[1922]: E0813 01:15:05.318886 1922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"192.168.178.99\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Aug 13 01:15:05.319677 kubelet[1922]: E0813 01:15:05.319421 1922 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:15:05.319924 kubelet[1922]: I0813 01:15:05.319901 1922 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:15:05.341494 kubelet[1922]: I0813 01:15:05.341313 1922 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:15:05.341494 kubelet[1922]: I0813 01:15:05.341325 1922 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:15:05.341494 kubelet[1922]: I0813 01:15:05.341338 1922 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:15:05.342688 kubelet[1922]: I0813 01:15:05.342642 1922 policy_none.go:49] "None policy: Start" Aug 13 01:15:05.342688 kubelet[1922]: I0813 01:15:05.342678 1922 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:15:05.342742 kubelet[1922]: I0813 01:15:05.342693 1922 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:15:05.349053 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:15:05.360389 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:15:05.365994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:15:05.374149 kubelet[1922]: I0813 01:15:05.373797 1922 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:15:05.375324 kubelet[1922]: I0813 01:15:05.375304 1922 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:15:05.375370 kubelet[1922]: I0813 01:15:05.375321 1922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:15:05.375639 kubelet[1922]: I0813 01:15:05.375620 1922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:15:05.377852 kubelet[1922]: E0813 01:15:05.377550 1922 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:15:05.377852 kubelet[1922]: E0813 01:15:05.377800 1922 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"192.168.178.99\" not found" Aug 13 01:15:05.379750 kubelet[1922]: I0813 01:15:05.379732 1922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:15:05.381071 kubelet[1922]: I0813 01:15:05.381031 1922 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:15:05.381318 kubelet[1922]: I0813 01:15:05.381297 1922 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:15:05.381403 kubelet[1922]: I0813 01:15:05.381393 1922 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:15:05.381470 kubelet[1922]: I0813 01:15:05.381461 1922 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:15:05.382466 kubelet[1922]: E0813 01:15:05.382391 1922 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 01:15:05.383456 kubelet[1922]: W0813 01:15:05.383364 1922 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 01:15:05.383527 kubelet[1922]: E0813 01:15:05.383393 1922 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Aug 13 01:15:05.478471 kubelet[1922]: I0813 01:15:05.478416 1922 kubelet_node_status.go:75] "Attempting to register node" node="192.168.178.99" Aug 13 01:15:05.489656 kubelet[1922]: I0813 01:15:05.489637 1922 kubelet_node_status.go:78] "Successfully registered node" node="192.168.178.99" Aug 13 01:15:05.489729 kubelet[1922]: E0813 01:15:05.489658 1922 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"192.168.178.99\": node \"192.168.178.99\" not found" Aug 13 01:15:05.510559 kubelet[1922]: E0813 01:15:05.510492 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:05.611924 kubelet[1922]: E0813 01:15:05.611668 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:05.712778 kubelet[1922]: E0813 01:15:05.712471 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:05.813661 kubelet[1922]: E0813 01:15:05.813457 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:05.914716 kubelet[1922]: E0813 01:15:05.914646 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:06.015954 kubelet[1922]: E0813 01:15:06.015701 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:06.117218 kubelet[1922]: E0813 01:15:06.117037 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:06.218172 kubelet[1922]: E0813 01:15:06.218103 1922 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.178.99\" not found" Aug 13 01:15:06.247318 kubelet[1922]: I0813 01:15:06.247282 1922 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 13 01:15:06.294556 kubelet[1922]: I0813 01:15:06.294345 1922 apiserver.go:52] "Watching apiserver" Aug 13 
01:15:06.294556 kubelet[1922]: E0813 01:15:06.294458 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:06.300190 kubelet[1922]: E0813 01:15:06.299232 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:06.310094 systemd[1]: Created slice kubepods-besteffort-pod738b5758_966c_4772_978d_efead35f1ba9.slice - libcontainer container kubepods-besteffort-pod738b5758_966c_4772_978d_efead35f1ba9.slice. Aug 13 01:15:06.311611 kubelet[1922]: I0813 01:15:06.311547 1922 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:15:06.318291 kubelet[1922]: I0813 01:15:06.318249 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-cni-net-dir\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318354 kubelet[1922]: I0813 01:15:06.318296 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-lib-modules\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318354 kubelet[1922]: I0813 01:15:06.318319 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/738b5758-966c-4772-978d-efead35f1ba9-node-certs\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318354 kubelet[1922]: I0813 01:15:06.318338 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-xtables-lock\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318414 kubelet[1922]: I0813 01:15:06.318359 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/706cc235-88f9-4461-aacc-ad5d00a0de1c-kubelet-dir\") pod \"csi-node-driver-qpdzq\" (UID: \"706cc235-88f9-4461-aacc-ad5d00a0de1c\") " pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:06.318414 kubelet[1922]: I0813 01:15:06.318385 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2f9b\" (UniqueName: \"kubernetes.io/projected/706cc235-88f9-4461-aacc-ad5d00a0de1c-kube-api-access-h2f9b\") pod \"csi-node-driver-qpdzq\" (UID: \"706cc235-88f9-4461-aacc-ad5d00a0de1c\") " pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:06.318414 kubelet[1922]: I0813 01:15:06.318404 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2168382f-4ba6-4f52-b989-8c3df1cc368d-kube-proxy\") pod \"kube-proxy-kdpw8\" (UID: 
\"2168382f-4ba6-4f52-b989-8c3df1cc368d\") " pod="kube-system/kube-proxy-kdpw8" Aug 13 01:15:06.318478 kubelet[1922]: I0813 01:15:06.318420 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2168382f-4ba6-4f52-b989-8c3df1cc368d-xtables-lock\") pod \"kube-proxy-kdpw8\" (UID: \"2168382f-4ba6-4f52-b989-8c3df1cc368d\") " pod="kube-system/kube-proxy-kdpw8" Aug 13 01:15:06.318478 kubelet[1922]: I0813 01:15:06.318438 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75bda629-afc2-444d-b018-fc87852383ca-var-lib-calico\") pod \"tigera-operator-747864d56d-v7fpp\" (UID: \"75bda629-afc2-444d-b018-fc87852383ca\") " pod="tigera-operator/tigera-operator-747864d56d-v7fpp" Aug 13 01:15:06.318478 kubelet[1922]: I0813 01:15:06.318455 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-cni-bin-dir\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318478 kubelet[1922]: I0813 01:15:06.318470 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4w7\" (UniqueName: \"kubernetes.io/projected/738b5758-966c-4772-978d-efead35f1ba9-kube-api-access-fx4w7\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318560 kubelet[1922]: I0813 01:15:06.318486 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/706cc235-88f9-4461-aacc-ad5d00a0de1c-registration-dir\") pod \"csi-node-driver-qpdzq\" (UID: \"706cc235-88f9-4461-aacc-ad5d00a0de1c\") " pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:06.318560 kubelet[1922]: I0813 01:15:06.318511 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/706cc235-88f9-4461-aacc-ad5d00a0de1c-varrun\") pod \"csi-node-driver-qpdzq\" (UID: \"706cc235-88f9-4461-aacc-ad5d00a0de1c\") " pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:06.318560 kubelet[1922]: I0813 01:15:06.318528 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26fxs\" (UniqueName: \"kubernetes.io/projected/2168382f-4ba6-4f52-b989-8c3df1cc368d-kube-api-access-26fxs\") pod \"kube-proxy-kdpw8\" (UID: \"2168382f-4ba6-4f52-b989-8c3df1cc368d\") " pod="kube-system/kube-proxy-kdpw8" Aug 13 01:15:06.318560 kubelet[1922]: I0813 01:15:06.318544 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsrx\" (UniqueName: \"kubernetes.io/projected/75bda629-afc2-444d-b018-fc87852383ca-kube-api-access-4wsrx\") pod \"tigera-operator-747864d56d-v7fpp\" (UID: \"75bda629-afc2-444d-b018-fc87852383ca\") " pod="tigera-operator/tigera-operator-747864d56d-v7fpp" Aug 13 01:15:06.318560 kubelet[1922]: I0813 01:15:06.318561 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-cni-log-dir\") pod 
\"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318681 kubelet[1922]: I0813 01:15:06.318593 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-flexvol-driver-host\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318681 kubelet[1922]: I0813 01:15:06.318612 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-policysync\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318681 kubelet[1922]: I0813 01:15:06.318645 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/738b5758-966c-4772-978d-efead35f1ba9-tigera-ca-bundle\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318681 kubelet[1922]: I0813 01:15:06.318659 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-var-lib-calico\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318681 kubelet[1922]: I0813 01:15:06.318678 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/738b5758-966c-4772-978d-efead35f1ba9-var-run-calico\") pod \"calico-node-rfw6k\" (UID: \"738b5758-966c-4772-978d-efead35f1ba9\") " pod="calico-system/calico-node-rfw6k" Aug 13 01:15:06.318771 kubelet[1922]: I0813 01:15:06.318722 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/706cc235-88f9-4461-aacc-ad5d00a0de1c-socket-dir\") pod \"csi-node-driver-qpdzq\" (UID: \"706cc235-88f9-4461-aacc-ad5d00a0de1c\") " pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:06.318771 kubelet[1922]: I0813 01:15:06.318748 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2168382f-4ba6-4f52-b989-8c3df1cc368d-lib-modules\") pod \"kube-proxy-kdpw8\" (UID: \"2168382f-4ba6-4f52-b989-8c3df1cc368d\") " pod="kube-system/kube-proxy-kdpw8" Aug 13 01:15:06.322628 kubelet[1922]: I0813 01:15:06.322458 1922 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 13 01:15:06.323688 containerd[1549]: time="2025-08-13T01:15:06.323594669Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:15:06.324602 kubelet[1922]: I0813 01:15:06.324350 1922 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 13 01:15:06.330531 systemd[1]: Created slice kubepods-besteffort-pod75bda629_afc2_444d_b018_fc87852383ca.slice - libcontainer container kubepods-besteffort-pod75bda629_afc2_444d_b018_fc87852383ca.slice. 
Aug 13 01:15:06.339379 sudo[1793]: pam_unix(sudo:session): session closed for user root Aug 13 01:15:06.342597 systemd[1]: Created slice kubepods-besteffort-pod2168382f_4ba6_4f52_b989_8c3df1cc368d.slice - libcontainer container kubepods-besteffort-pod2168382f_4ba6_4f52_b989_8c3df1cc368d.slice. Aug 13 01:15:06.391848 sshd[1792]: Connection closed by 147.75.109.163 port 35330 Aug 13 01:15:06.392765 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:06.396080 systemd[1]: sshd@6-172.234.199.78:22-147.75.109.163:35330.service: Deactivated successfully. Aug 13 01:15:06.398301 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:15:06.398513 systemd[1]: session-7.scope: Consumed 414ms CPU time, 70.1M memory peak. Aug 13 01:15:06.401443 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:15:06.402877 systemd-logind[1528]: Removed session 7. Aug 13 01:15:06.422282 kubelet[1922]: E0813 01:15:06.422245 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.422418 kubelet[1922]: W0813 01:15:06.422261 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.422418 kubelet[1922]: E0813 01:15:06.422367 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.423666 kubelet[1922]: E0813 01:15:06.423550 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.425630 kubelet[1922]: W0813 01:15:06.425606 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.425667 kubelet[1922]: E0813 01:15:06.425635 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.426024 kubelet[1922]: E0813 01:15:06.425992 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.426297 kubelet[1922]: W0813 01:15:06.426277 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.426900 kubelet[1922]: E0813 01:15:06.426872 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.426900 kubelet[1922]: W0813 01:15:06.426887 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.427126 kubelet[1922]: E0813 01:15:06.427108 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:06.427163 kubelet[1922]: E0813 01:15:06.427143 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.427878 kubelet[1922]: E0813 01:15:06.427860 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.427878 kubelet[1922]: W0813 01:15:06.427875 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.427949 kubelet[1922]: E0813 01:15:06.427888 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.428425 kubelet[1922]: E0813 01:15:06.428402 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.428425 kubelet[1922]: W0813 01:15:06.428418 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.428481 kubelet[1922]: E0813 01:15:06.428428 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.434398 kubelet[1922]: E0813 01:15:06.434359 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.434398 kubelet[1922]: W0813 01:15:06.434373 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.435543 kubelet[1922]: E0813 01:15:06.435226 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.445917 kubelet[1922]: E0813 01:15:06.443643 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.445917 kubelet[1922]: W0813 01:15:06.443662 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.445917 kubelet[1922]: E0813 01:15:06.443673 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:06.447539 kubelet[1922]: E0813 01:15:06.447512 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.447539 kubelet[1922]: W0813 01:15:06.447530 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.447539 kubelet[1922]: E0813 01:15:06.447541 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.452594 kubelet[1922]: E0813 01:15:06.452369 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:06.452657 kubelet[1922]: W0813 01:15:06.452645 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:06.452703 kubelet[1922]: E0813 01:15:06.452692 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:06.622804 containerd[1549]: time="2025-08-13T01:15:06.622771549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rfw6k,Uid:738b5758-966c-4772-978d-efead35f1ba9,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:06.636619 containerd[1549]: time="2025-08-13T01:15:06.635922468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-v7fpp,Uid:75bda629-afc2-444d-b018-fc87852383ca,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:15:06.646011 kubelet[1922]: E0813 01:15:06.645601 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:06.648646 containerd[1549]: time="2025-08-13T01:15:06.648534703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdpw8,Uid:2168382f-4ba6-4f52-b989-8c3df1cc368d,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:07.295725 kubelet[1922]: E0813 01:15:07.295657 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:07.382267 kubelet[1922]: E0813 01:15:07.382240 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:07.392323 containerd[1549]: time="2025-08-13T01:15:07.392293511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:15:07.397452 containerd[1549]: time="2025-08-13T01:15:07.397421832Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:15:07.398356 containerd[1549]: time="2025-08-13T01:15:07.398335491Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:15:07.398865 containerd[1549]: time="2025-08-13T01:15:07.398838713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Aug 13 01:15:07.399946 containerd[1549]: time="2025-08-13T01:15:07.399921748Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:15:07.401343 containerd[1549]: time="2025-08-13T01:15:07.401316921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:15:07.401941 containerd[1549]: time="2025-08-13T01:15:07.401901893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Aug 13 01:15:07.403830 containerd[1549]: time="2025-08-13T01:15:07.403781886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:15:07.404425 containerd[1549]: time="2025-08-13T01:15:07.404403079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 751.069561ms" Aug 13 01:15:07.405421 containerd[1549]: time="2025-08-13T01:15:07.405253953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 777.645564ms" Aug 13 01:15:07.407086 containerd[1549]: time="2025-08-13T01:15:07.407065770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 767.533133ms" Aug 13 01:15:07.426750 containerd[1549]: time="2025-08-13T01:15:07.426674781Z" level=info msg="connecting to shim f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647" address="unix:///run/containerd/s/557bb1a208b312b579e00087ae8dab8b1074424aff3a40f21bc9087fa8de74f3" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:07.437152 containerd[1549]: time="2025-08-13T01:15:07.435915699Z" level=info msg="connecting to shim f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc" address="unix:///run/containerd/s/87312d5be0c14fae4266b52c10280c806a796d54a9ead695f1554cdec88c4115" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:07.439584 containerd[1549]: time="2025-08-13T01:15:07.439232688Z" level=info msg="connecting to shim 8ecd7f008a24ddb766d8d7732ad184101c8176adffd98db14a2f4394f11fa3e4" address="unix:///run/containerd/s/1606049c7484a6fafabae7dc7695045a95a42d54beca239d844615257143687d" namespace=k8s.io 
protocol=ttrpc version=3 Aug 13 01:15:07.472920 systemd[1]: Started cri-containerd-f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647.scope - libcontainer container f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647. Aug 13 01:15:07.491690 systemd[1]: Started cri-containerd-8ecd7f008a24ddb766d8d7732ad184101c8176adffd98db14a2f4394f11fa3e4.scope - libcontainer container 8ecd7f008a24ddb766d8d7732ad184101c8176adffd98db14a2f4394f11fa3e4. Aug 13 01:15:07.495761 systemd[1]: Started cri-containerd-f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc.scope - libcontainer container f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc. Aug 13 01:15:07.546657 containerd[1549]: time="2025-08-13T01:15:07.546523182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdpw8,Uid:2168382f-4ba6-4f52-b989-8c3df1cc368d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ecd7f008a24ddb766d8d7732ad184101c8176adffd98db14a2f4394f11fa3e4\"" Aug 13 01:15:07.549916 kubelet[1922]: E0813 01:15:07.549876 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:07.550531 containerd[1549]: time="2025-08-13T01:15:07.550392915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rfw6k,Uid:738b5758-966c-4772-978d-efead35f1ba9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\"" Aug 13 01:15:07.551615 containerd[1549]: time="2025-08-13T01:15:07.551538795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-v7fpp,Uid:75bda629-afc2-444d-b018-fc87852383ca,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\"" Aug 13 01:15:07.553044 containerd[1549]: time="2025-08-13T01:15:07.552871092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 01:15:08.296353 kubelet[1922]: E0813 01:15:08.296312 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:08.728601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2281989511.mount: Deactivated successfully. 
Aug 13 01:15:09.051689 containerd[1549]: time="2025-08-13T01:15:09.051580143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:09.052608 containerd[1549]: time="2025-08-13T01:15:09.052509340Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 01:15:09.053089 containerd[1549]: time="2025-08-13T01:15:09.053052867Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:09.054220 containerd[1549]: time="2025-08-13T01:15:09.054187530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:09.054715 containerd[1549]: time="2025-08-13T01:15:09.054687911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.501787012s" Aug 13 01:15:09.054757 containerd[1549]: time="2025-08-13T01:15:09.054716862Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 01:15:09.055769 containerd[1549]: time="2025-08-13T01:15:09.055744875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:15:09.058616 containerd[1549]: time="2025-08-13T01:15:09.056970607Z" level=info msg="CreateContainer within sandbox \"8ecd7f008a24ddb766d8d7732ad184101c8176adffd98db14a2f4394f11fa3e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:15:09.065939 containerd[1549]: time="2025-08-13T01:15:09.065488385Z" level=info msg="Container d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:09.077430 containerd[1549]: time="2025-08-13T01:15:09.077399239Z" level=info msg="CreateContainer within sandbox \"8ecd7f008a24ddb766d8d7732ad184101c8176adffd98db14a2f4394f11fa3e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3\"" Aug 13 01:15:09.078618 containerd[1549]: time="2025-08-13T01:15:09.078078127Z" level=info msg="StartContainer for \"d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3\"" Aug 13 01:15:09.079725 containerd[1549]: time="2025-08-13T01:15:09.079702870Z" level=info msg="connecting to shim d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3" address="unix:///run/containerd/s/1606049c7484a6fafabae7dc7695045a95a42d54beca239d844615257143687d" protocol=ttrpc version=3 Aug 13 01:15:09.101691 systemd[1]: Started cri-containerd-d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3.scope - libcontainer container d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3. 
Aug 13 01:15:09.143178 containerd[1549]: time="2025-08-13T01:15:09.143143255Z" level=info msg="StartContainer for \"d3487ad55feb7287f7037966ae99262ff7c2e025dd0633d67b35af3857e9e7c3\" returns successfully" Aug 13 01:15:09.297374 kubelet[1922]: E0813 01:15:09.297335 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:09.382825 kubelet[1922]: E0813 01:15:09.382548 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:09.401605 kubelet[1922]: E0813 01:15:09.401375 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:09.410080 kubelet[1922]: I0813 01:15:09.410023 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdpw8" podStartSLOduration=2.9058889519999997 podStartE2EDuration="4.410011524s" podCreationTimestamp="2025-08-13 01:15:05 +0000 UTC" firstStartedPulling="2025-08-13 01:15:07.55146309 +0000 UTC m=+2.736657075" lastFinishedPulling="2025-08-13 01:15:09.055585653 +0000 UTC m=+4.240779647" observedRunningTime="2025-08-13 01:15:09.409788092 +0000 UTC m=+4.594982076" watchObservedRunningTime="2025-08-13 01:15:09.410011524 +0000 UTC m=+4.595205519" Aug 13 01:15:09.432208 kubelet[1922]: E0813 01:15:09.432164 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.432252 kubelet[1922]: W0813 01:15:09.432235 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.432561 kubelet[1922]: E0813 01:15:09.432528 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.432906 kubelet[1922]: E0813 01:15:09.432885 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.432906 kubelet[1922]: W0813 01:15:09.432900 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.432976 kubelet[1922]: E0813 01:15:09.432909 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:09.433158 kubelet[1922]: E0813 01:15:09.433137 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.433158 kubelet[1922]: W0813 01:15:09.433149 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.433158 kubelet[1922]: E0813 01:15:09.433157 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.433423 kubelet[1922]: E0813 01:15:09.433402 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.433423 kubelet[1922]: W0813 01:15:09.433417 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.433423 kubelet[1922]: E0813 01:15:09.433425 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.433706 kubelet[1922]: E0813 01:15:09.433638 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.433706 kubelet[1922]: W0813 01:15:09.433653 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.433706 kubelet[1922]: E0813 01:15:09.433661 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.433861 kubelet[1922]: E0813 01:15:09.433846 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.433900 kubelet[1922]: W0813 01:15:09.433876 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.433900 kubelet[1922]: E0813 01:15:09.433885 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.434118 kubelet[1922]: E0813 01:15:09.434065 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.434118 kubelet[1922]: W0813 01:15:09.434077 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.434118 kubelet[1922]: E0813 01:15:09.434084 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:09.434289 kubelet[1922]: E0813 01:15:09.434259 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.434289 kubelet[1922]: W0813 01:15:09.434272 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.434289 kubelet[1922]: E0813 01:15:09.434279 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.434455 kubelet[1922]: E0813 01:15:09.434438 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.434455 kubelet[1922]: W0813 01:15:09.434451 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.434505 kubelet[1922]: E0813 01:15:09.434459 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.434732 kubelet[1922]: E0813 01:15:09.434698 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.434732 kubelet[1922]: W0813 01:15:09.434730 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.434781 kubelet[1922]: E0813 01:15:09.434738 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.434942 kubelet[1922]: E0813 01:15:09.434919 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.434942 kubelet[1922]: W0813 01:15:09.434934 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.434942 kubelet[1922]: E0813 01:15:09.434941 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.435139 kubelet[1922]: E0813 01:15:09.435114 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.435139 kubelet[1922]: W0813 01:15:09.435130 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.435139 kubelet[1922]: E0813 01:15:09.435137 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:09.435337 kubelet[1922]: E0813 01:15:09.435316 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.435337 kubelet[1922]: W0813 01:15:09.435331 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.435397 kubelet[1922]: E0813 01:15:09.435367 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.435559 kubelet[1922]: E0813 01:15:09.435514 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.435559 kubelet[1922]: W0813 01:15:09.435525 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.435622 kubelet[1922]: E0813 01:15:09.435586 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.435952 kubelet[1922]: E0813 01:15:09.435918 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.435952 kubelet[1922]: W0813 01:15:09.435933 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.435952 kubelet[1922]: E0813 01:15:09.435941 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.436223 kubelet[1922]: E0813 01:15:09.436199 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.436223 kubelet[1922]: W0813 01:15:09.436214 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.436223 kubelet[1922]: E0813 01:15:09.436222 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.436404 kubelet[1922]: E0813 01:15:09.436380 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.436404 kubelet[1922]: W0813 01:15:09.436396 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.436404 kubelet[1922]: E0813 01:15:09.436403 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:09.436608 kubelet[1922]: E0813 01:15:09.436586 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.436608 kubelet[1922]: W0813 01:15:09.436595 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.436608 kubelet[1922]: E0813 01:15:09.436602 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.436855 kubelet[1922]: E0813 01:15:09.436769 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.436855 kubelet[1922]: W0813 01:15:09.436781 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.436855 kubelet[1922]: E0813 01:15:09.436788 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.438017 kubelet[1922]: E0813 01:15:09.436928 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.438017 kubelet[1922]: W0813 01:15:09.436942 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.438017 kubelet[1922]: E0813 01:15:09.436949 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.444458 kubelet[1922]: E0813 01:15:09.444389 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.444458 kubelet[1922]: W0813 01:15:09.444404 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.444458 kubelet[1922]: E0813 01:15:09.444433 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.444823 kubelet[1922]: E0813 01:15:09.444798 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.444823 kubelet[1922]: W0813 01:15:09.444815 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.444823 kubelet[1922]: E0813 01:15:09.444824 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446108 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447594 kubelet[1922]: W0813 01:15:09.446122 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446146 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446327 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447594 kubelet[1922]: W0813 01:15:09.446335 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446472 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446736 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447594 kubelet[1922]: W0813 01:15:09.446743 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446755 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.447594 kubelet[1922]: E0813 01:15:09.446933 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447798 kubelet[1922]: W0813 01:15:09.446941 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.447798 kubelet[1922]: E0813 01:15:09.446963 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.447798 kubelet[1922]: E0813 01:15:09.447132 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447798 kubelet[1922]: W0813 01:15:09.447139 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.447798 kubelet[1922]: E0813 01:15:09.447159 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:09.447798 kubelet[1922]: E0813 01:15:09.447324 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447798 kubelet[1922]: W0813 01:15:09.447331 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.447798 kubelet[1922]: E0813 01:15:09.447350 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.447798 kubelet[1922]: E0813 01:15:09.447542 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.447798 kubelet[1922]: W0813 01:15:09.447549 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.448013 kubelet[1922]: E0813 01:15:09.447723 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.448013 kubelet[1922]: W0813 01:15:09.447730 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.448013 kubelet[1922]: E0813 01:15:09.447739 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.448013 kubelet[1922]: E0813 01:15:09.447992 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.448214 kubelet[1922]: E0813 01:15:09.448190 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.448214 kubelet[1922]: W0813 01:15:09.448206 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.448272 kubelet[1922]: E0813 01:15:09.448231 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:15:09.448642 kubelet[1922]: E0813 01:15:09.448624 1922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:15:09.448642 kubelet[1922]: W0813 01:15:09.448636 1922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:15:09.448693 kubelet[1922]: E0813 01:15:09.448644 1922 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:15:10.227776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072309233.mount: Deactivated successfully. Aug 13 01:15:10.291589 containerd[1549]: time="2025-08-13T01:15:10.291043017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:10.291882 containerd[1549]: time="2025-08-13T01:15:10.291810835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 13 01:15:10.291991 containerd[1549]: time="2025-08-13T01:15:10.291970693Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:10.293390 containerd[1549]: time="2025-08-13T01:15:10.293370625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:10.293934 containerd[1549]: time="2025-08-13T01:15:10.293906341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.238135981s" Aug 13 01:15:10.293969 containerd[1549]: time="2025-08-13T01:15:10.293937935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:15:10.295599 containerd[1549]: time="2025-08-13T01:15:10.295561768Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:15:10.296265 containerd[1549]: time="2025-08-13T01:15:10.296244365Z" level=info msg="CreateContainer within sandbox \"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:15:10.298292 kubelet[1922]: E0813 01:15:10.298257 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:10.305687 containerd[1549]: time="2025-08-13T01:15:10.305665876Z" level=info msg="Container 2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:10.308538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058288100.mount: Deactivated successfully. 
Aug 13 01:15:10.315681 containerd[1549]: time="2025-08-13T01:15:10.315659392Z" level=info msg="CreateContainer within sandbox \"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\"" Aug 13 01:15:10.316118 containerd[1549]: time="2025-08-13T01:15:10.316091511Z" level=info msg="StartContainer for \"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\"" Aug 13 01:15:10.317128 containerd[1549]: time="2025-08-13T01:15:10.317108580Z" level=info msg="connecting to shim 2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb" address="unix:///run/containerd/s/87312d5be0c14fae4266b52c10280c806a796d54a9ead695f1554cdec88c4115" protocol=ttrpc version=3 Aug 13 01:15:10.338715 systemd[1]: Started cri-containerd-2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb.scope - libcontainer container 2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb. Aug 13 01:15:10.375298 containerd[1549]: time="2025-08-13T01:15:10.375243703Z" level=info msg="StartContainer for \"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\" returns successfully" Aug 13 01:15:10.386359 systemd[1]: cri-containerd-2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb.scope: Deactivated successfully. Aug 13 01:15:10.389304 containerd[1549]: time="2025-08-13T01:15:10.389282628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\" id:\"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\" pid:2346 exited_at:{seconds:1755047710 nanos:388887259}" Aug 13 01:15:10.389370 containerd[1549]: time="2025-08-13T01:15:10.389326445Z" level=info msg="received exit event container_id:\"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\" id:\"2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb\" pid:2346 exited_at:{seconds:1755047710 nanos:388887259}" Aug 13 01:15:10.407085 kubelet[1922]: E0813 01:15:10.406625 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:11.202292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d6c44e6181888da744dfca893b2ee463a8b6d8ab162c2c8ea66e10507aacfdb-rootfs.mount: Deactivated successfully. 
Aug 13 01:15:11.298837 kubelet[1922]: E0813 01:15:11.298795 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:11.382189 kubelet[1922]: E0813 01:15:11.382142 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:11.751757 containerd[1549]: time="2025-08-13T01:15:11.751704706Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:11.752631 containerd[1549]: time="2025-08-13T01:15:11.752429524Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 01:15:11.753180 containerd[1549]: time="2025-08-13T01:15:11.753155166Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:11.754460 containerd[1549]: time="2025-08-13T01:15:11.754435403Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:11.755091 containerd[1549]: time="2025-08-13T01:15:11.755069228Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.459467569s" Aug 13 01:15:11.755220 containerd[1549]: time="2025-08-13T01:15:11.755166068Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:15:11.756874 containerd[1549]: time="2025-08-13T01:15:11.756717776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:15:11.757142 containerd[1549]: time="2025-08-13T01:15:11.757116847Z" level=info msg="CreateContainer within sandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:15:11.766399 containerd[1549]: time="2025-08-13T01:15:11.765957790Z" level=info msg="Container b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:11.773916 containerd[1549]: time="2025-08-13T01:15:11.773869117Z" level=info msg="CreateContainer within sandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\"" Aug 13 01:15:11.774307 containerd[1549]: time="2025-08-13T01:15:11.774263908Z" level=info msg="StartContainer for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\"" Aug 13 01:15:11.775182 containerd[1549]: time="2025-08-13T01:15:11.775162112Z" level=info msg="connecting to shim b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796" 
address="unix:///run/containerd/s/557bb1a208b312b579e00087ae8dab8b1074424aff3a40f21bc9087fa8de74f3" protocol=ttrpc version=3 Aug 13 01:15:11.792856 systemd[1]: Started cri-containerd-b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796.scope - libcontainer container b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796. Aug 13 01:15:11.822504 containerd[1549]: time="2025-08-13T01:15:11.822440292Z" level=info msg="StartContainer for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" returns successfully" Aug 13 01:15:12.299361 kubelet[1922]: E0813 01:15:12.299309 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:13.299466 kubelet[1922]: E0813 01:15:13.299398 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:13.383977 kubelet[1922]: E0813 01:15:13.383685 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:14.178309 containerd[1549]: time="2025-08-13T01:15:14.178272057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:14.179307 containerd[1549]: time="2025-08-13T01:15:14.179140610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 01:15:14.179479 containerd[1549]: time="2025-08-13T01:15:14.179459008Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:14.180900 containerd[1549]: time="2025-08-13T01:15:14.180879541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:14.181703 containerd[1549]: time="2025-08-13T01:15:14.181663045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.42492352s" Aug 13 01:15:14.181748 containerd[1549]: time="2025-08-13T01:15:14.181703107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:15:14.183789 containerd[1549]: time="2025-08-13T01:15:14.183763738Z" level=info msg="CreateContainer within sandbox \"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:15:14.196131 containerd[1549]: time="2025-08-13T01:15:14.193314081Z" level=info msg="Container b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:14.200376 containerd[1549]: time="2025-08-13T01:15:14.200343101Z" level=info msg="CreateContainer within sandbox 
\"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\"" Aug 13 01:15:14.201028 containerd[1549]: time="2025-08-13T01:15:14.201007071Z" level=info msg="StartContainer for \"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\"" Aug 13 01:15:14.202417 containerd[1549]: time="2025-08-13T01:15:14.202386338Z" level=info msg="connecting to shim b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb" address="unix:///run/containerd/s/87312d5be0c14fae4266b52c10280c806a796d54a9ead695f1554cdec88c4115" protocol=ttrpc version=3 Aug 13 01:15:14.221833 systemd[1]: Started cri-containerd-b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb.scope - libcontainer container b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb. Aug 13 01:15:14.268060 containerd[1549]: time="2025-08-13T01:15:14.268002052Z" level=info msg="StartContainer for \"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\" returns successfully" Aug 13 01:15:14.300274 kubelet[1922]: E0813 01:15:14.300226 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:14.439487 kubelet[1922]: I0813 01:15:14.438604 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-v7fpp" podStartSLOduration=5.236089862 podStartE2EDuration="9.438592561s" podCreationTimestamp="2025-08-13 01:15:05 +0000 UTC" firstStartedPulling="2025-08-13 01:15:07.5533206 +0000 UTC m=+2.738514584" lastFinishedPulling="2025-08-13 01:15:11.755823299 +0000 UTC m=+6.941017283" observedRunningTime="2025-08-13 01:15:12.425560427 +0000 UTC m=+7.610754411" watchObservedRunningTime="2025-08-13 01:15:14.438592561 +0000 UTC m=+9.623786545" Aug 13 01:15:14.718397 systemd[1]: cri-containerd-b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb.scope: Deactivated successfully. Aug 13 01:15:14.719185 systemd[1]: cri-containerd-b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb.scope: Consumed 510ms CPU time, 195.4M memory peak, 171.2M written to disk. Aug 13 01:15:14.720597 containerd[1549]: time="2025-08-13T01:15:14.720512194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\" id:\"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\" pid:2441 exited_at:{seconds:1755047714 nanos:720144926}" Aug 13 01:15:14.720765 containerd[1549]: time="2025-08-13T01:15:14.720533750Z" level=info msg="received exit event container_id:\"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\" id:\"b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb\" pid:2441 exited_at:{seconds:1755047714 nanos:720144926}" Aug 13 01:15:14.742526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47198f7fc6aba51121f2a16a539a0f0d958fdcad9fe197c48c66e4ca359c0eb-rootfs.mount: Deactivated successfully. Aug 13 01:15:14.809033 kubelet[1922]: I0813 01:15:14.808999 1922 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:15:14.839539 systemd[1]: Created slice kubepods-besteffort-pod014bd6a3_8dab_4cb8_9701_577f24c81f0a.slice - libcontainer container kubepods-besteffort-pod014bd6a3_8dab_4cb8_9701_577f24c81f0a.slice. 
Aug 13 01:15:14.847228 systemd[1]: Created slice kubepods-besteffort-pod7cf8f307_db15_4438_a8ed_91e671f6b9c3.slice - libcontainer container kubepods-besteffort-pod7cf8f307_db15_4438_a8ed_91e671f6b9c3.slice. Aug 13 01:15:14.852676 systemd[1]: Created slice kubepods-besteffort-podd6a1a47d_2556_4325_b483_facae1719336.slice - libcontainer container kubepods-besteffort-podd6a1a47d_2556_4325_b483_facae1719336.slice. Aug 13 01:15:14.857003 systemd[1]: Created slice kubepods-besteffort-pod40a60688_4101_413f_9647_7f5f3b8b0a05.slice - libcontainer container kubepods-besteffort-pod40a60688_4101_413f_9647_7f5f3b8b0a05.slice. Aug 13 01:15:14.886749 kubelet[1922]: I0813 01:15:14.886688 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-mc8t4\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:14.886749 kubelet[1922]: I0813 01:15:14.886732 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ptn\" (UniqueName: \"kubernetes.io/projected/40a60688-4101-413f-9647-7f5f3b8b0a05-kube-api-access-k9ptn\") pod \"goldmane-768f4c5c69-mc8t4\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:14.887253 kubelet[1922]: I0813 01:15:14.887049 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-key-pair\") pod \"goldmane-768f4c5c69-mc8t4\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:14.887253 kubelet[1922]: I0813 01:15:14.887078 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/014bd6a3-8dab-4cb8-9701-577f24c81f0a-calico-apiserver-certs\") pod \"calico-apiserver-bc89dd6cc-glrdh\" (UID: \"014bd6a3-8dab-4cb8-9701-577f24c81f0a\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-glrdh" Aug 13 01:15:14.887253 kubelet[1922]: I0813 01:15:14.887092 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qv7\" (UniqueName: \"kubernetes.io/projected/014bd6a3-8dab-4cb8-9701-577f24c81f0a-kube-api-access-h2qv7\") pod \"calico-apiserver-bc89dd6cc-glrdh\" (UID: \"014bd6a3-8dab-4cb8-9701-577f24c81f0a\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-glrdh" Aug 13 01:15:14.887253 kubelet[1922]: I0813 01:15:14.887108 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwbd\" (UniqueName: \"kubernetes.io/projected/7cf8f307-db15-4438-a8ed-91e671f6b9c3-kube-api-access-7cwbd\") pod \"calico-apiserver-bc89dd6cc-wzmp5\" (UID: \"7cf8f307-db15-4438-a8ed-91e671f6b9c3\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5" Aug 13 01:15:14.887253 kubelet[1922]: I0813 01:15:14.887122 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6a1a47d-2556-4325-b483-facae1719336-whisker-backend-key-pair\") pod \"whisker-6c85ff7dcf-4r5qm\" (UID: \"d6a1a47d-2556-4325-b483-facae1719336\") " 
pod="calico-system/whisker-6c85ff7dcf-4r5qm" Aug 13 01:15:14.887377 kubelet[1922]: I0813 01:15:14.887136 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-config\") pod \"goldmane-768f4c5c69-mc8t4\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:14.887377 kubelet[1922]: I0813 01:15:14.887152 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6a1a47d-2556-4325-b483-facae1719336-whisker-ca-bundle\") pod \"whisker-6c85ff7dcf-4r5qm\" (UID: \"d6a1a47d-2556-4325-b483-facae1719336\") " pod="calico-system/whisker-6c85ff7dcf-4r5qm" Aug 13 01:15:14.887377 kubelet[1922]: I0813 01:15:14.887166 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2ww\" (UniqueName: \"kubernetes.io/projected/d6a1a47d-2556-4325-b483-facae1719336-kube-api-access-5s2ww\") pod \"whisker-6c85ff7dcf-4r5qm\" (UID: \"d6a1a47d-2556-4325-b483-facae1719336\") " pod="calico-system/whisker-6c85ff7dcf-4r5qm" Aug 13 01:15:14.887377 kubelet[1922]: I0813 01:15:14.887184 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cf8f307-db15-4438-a8ed-91e671f6b9c3-calico-apiserver-certs\") pod \"calico-apiserver-bc89dd6cc-wzmp5\" (UID: \"7cf8f307-db15-4438-a8ed-91e671f6b9c3\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5" Aug 13 01:15:15.144361 containerd[1549]: time="2025-08-13T01:15:15.143855483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-glrdh,Uid:014bd6a3-8dab-4cb8-9701-577f24c81f0a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:15:15.151647 containerd[1549]: time="2025-08-13T01:15:15.151330162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-wzmp5,Uid:7cf8f307-db15-4438-a8ed-91e671f6b9c3,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:15:15.157203 containerd[1549]: time="2025-08-13T01:15:15.157176635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-4r5qm,Uid:d6a1a47d-2556-4325-b483-facae1719336,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:15.160494 containerd[1549]: time="2025-08-13T01:15:15.160468593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mc8t4,Uid:40a60688-4101-413f-9647-7f5f3b8b0a05,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:15.236084 containerd[1549]: time="2025-08-13T01:15:15.236030378Z" level=error msg="Failed to destroy network for sandbox \"7dcf742ffead4904b562834bbc1dd21d4cd6c634110377afafd3560065e0ed3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.239284 systemd[1]: run-netns-cni\x2da3c97ef1\x2d49eb\x2de27d\x2dcc3c\x2d3212aa64976c.mount: Deactivated successfully. 
Aug 13 01:15:15.242142 containerd[1549]: time="2025-08-13T01:15:15.240664351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-glrdh,Uid:014bd6a3-8dab-4cb8-9701-577f24c81f0a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcf742ffead4904b562834bbc1dd21d4cd6c634110377afafd3560065e0ed3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.242239 kubelet[1922]: E0813 01:15:15.241178 1922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcf742ffead4904b562834bbc1dd21d4cd6c634110377afafd3560065e0ed3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.242239 kubelet[1922]: E0813 01:15:15.241245 1922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcf742ffead4904b562834bbc1dd21d4cd6c634110377afafd3560065e0ed3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-glrdh" Aug 13 01:15:15.242239 kubelet[1922]: E0813 01:15:15.241265 1922 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dcf742ffead4904b562834bbc1dd21d4cd6c634110377afafd3560065e0ed3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-glrdh" Aug 13 01:15:15.242326 kubelet[1922]: E0813 01:15:15.241301 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc89dd6cc-glrdh_calico-apiserver(014bd6a3-8dab-4cb8-9701-577f24c81f0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc89dd6cc-glrdh_calico-apiserver(014bd6a3-8dab-4cb8-9701-577f24c81f0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dcf742ffead4904b562834bbc1dd21d4cd6c634110377afafd3560065e0ed3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc89dd6cc-glrdh" podUID="014bd6a3-8dab-4cb8-9701-577f24c81f0a" Aug 13 01:15:15.267016 containerd[1549]: time="2025-08-13T01:15:15.266914891Z" level=error msg="Failed to destroy network for sandbox \"12e1c4b068f7f1cb5dff884194c6badac7ab94691fb0a9881be665f0a608cbe1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.269645 containerd[1549]: time="2025-08-13T01:15:15.269616358Z" level=error msg="Failed to destroy network for sandbox \"174eccb4f29818ebffcc042260651e97dbfa14b276ba62cf08f1408c3cd9675e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.269774 systemd[1]: run-netns-cni\x2d86d0653f\x2d687d\x2df099\x2d12c8\x2d41a44ffa3b61.mount: Deactivated successfully. Aug 13 01:15:15.272422 systemd[1]: run-netns-cni\x2df95bb5b2\x2d2eef\x2d40d5\x2d4267\x2df40d2989322a.mount: Deactivated successfully. Aug 13 01:15:15.272965 containerd[1549]: time="2025-08-13T01:15:15.272932961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-4r5qm,Uid:d6a1a47d-2556-4325-b483-facae1719336,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e1c4b068f7f1cb5dff884194c6badac7ab94691fb0a9881be665f0a608cbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.273995 kubelet[1922]: E0813 01:15:15.273433 1922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e1c4b068f7f1cb5dff884194c6badac7ab94691fb0a9881be665f0a608cbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.273995 kubelet[1922]: E0813 01:15:15.273958 1922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e1c4b068f7f1cb5dff884194c6badac7ab94691fb0a9881be665f0a608cbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-4r5qm" Aug 13 01:15:15.274789 kubelet[1922]: E0813 01:15:15.274107 1922 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e1c4b068f7f1cb5dff884194c6badac7ab94691fb0a9881be665f0a608cbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-4r5qm" Aug 13 01:15:15.274789 kubelet[1922]: E0813 01:15:15.274663 1922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"174eccb4f29818ebffcc042260651e97dbfa14b276ba62cf08f1408c3cd9675e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.274789 kubelet[1922]: E0813 01:15:15.274705 1922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"174eccb4f29818ebffcc042260651e97dbfa14b276ba62cf08f1408c3cd9675e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5" Aug 13 01:15:15.274789 kubelet[1922]: E0813 01:15:15.274719 1922 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"174eccb4f29818ebffcc042260651e97dbfa14b276ba62cf08f1408c3cd9675e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5" Aug 13 01:15:15.274908 containerd[1549]: time="2025-08-13T01:15:15.274163208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-wzmp5,Uid:7cf8f307-db15-4438-a8ed-91e671f6b9c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"174eccb4f29818ebffcc042260651e97dbfa14b276ba62cf08f1408c3cd9675e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.274954 kubelet[1922]: E0813 01:15:15.274746 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc89dd6cc-wzmp5_calico-apiserver(7cf8f307-db15-4438-a8ed-91e671f6b9c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc89dd6cc-wzmp5_calico-apiserver(7cf8f307-db15-4438-a8ed-91e671f6b9c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"174eccb4f29818ebffcc042260651e97dbfa14b276ba62cf08f1408c3cd9675e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5" podUID="7cf8f307-db15-4438-a8ed-91e671f6b9c3" Aug 13 01:15:15.275024 kubelet[1922]: E0813 01:15:15.275003 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c85ff7dcf-4r5qm_calico-system(d6a1a47d-2556-4325-b483-facae1719336)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c85ff7dcf-4r5qm_calico-system(d6a1a47d-2556-4325-b483-facae1719336)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12e1c4b068f7f1cb5dff884194c6badac7ab94691fb0a9881be665f0a608cbe1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c85ff7dcf-4r5qm" podUID="d6a1a47d-2556-4325-b483-facae1719336" Aug 13 01:15:15.280525 containerd[1549]: time="2025-08-13T01:15:15.280501441Z" level=error msg="Failed to destroy network for sandbox \"fcf9d27e0cdc6a6ec3880884a97a2c45b330b012f543dda1973825419e03276b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.281916 systemd[1]: run-netns-cni\x2d6fa2c106\x2d7c6b\x2d48f5\x2db45e\x2dc30d0b6359e1.mount: Deactivated successfully. 
Aug 13 01:15:15.282878 containerd[1549]: time="2025-08-13T01:15:15.282681597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mc8t4,Uid:40a60688-4101-413f-9647-7f5f3b8b0a05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcf9d27e0cdc6a6ec3880884a97a2c45b330b012f543dda1973825419e03276b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.282942 kubelet[1922]: E0813 01:15:15.282796 1922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcf9d27e0cdc6a6ec3880884a97a2c45b330b012f543dda1973825419e03276b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.282942 kubelet[1922]: E0813 01:15:15.282824 1922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcf9d27e0cdc6a6ec3880884a97a2c45b330b012f543dda1973825419e03276b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:15.282942 kubelet[1922]: E0813 01:15:15.282841 1922 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcf9d27e0cdc6a6ec3880884a97a2c45b330b012f543dda1973825419e03276b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:15.283055 kubelet[1922]: E0813 01:15:15.282874 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-mc8t4_calico-system(40a60688-4101-413f-9647-7f5f3b8b0a05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-mc8t4_calico-system(40a60688-4101-413f-9647-7f5f3b8b0a05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcf9d27e0cdc6a6ec3880884a97a2c45b330b012f543dda1973825419e03276b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-mc8t4" podUID="40a60688-4101-413f-9647-7f5f3b8b0a05" Aug 13 01:15:15.301301 kubelet[1922]: E0813 01:15:15.301280 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:15.389605 systemd[1]: Created slice kubepods-besteffort-pod706cc235_88f9_4461_aacc_ad5d00a0de1c.slice - libcontainer container kubepods-besteffort-pod706cc235_88f9_4461_aacc_ad5d00a0de1c.slice. 
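[editor's note] Each sandbox failure above reaches the kubelet as "rpc error: code = Unknown desc = failed to setup network for sandbox …", i.e. the CNI error crosses the CRI gRPC boundary wrapped in a status with code Unknown. A small sketch, assuming the standard google.golang.org/grpc/status package, showing how that message text is produced and how the code and description can be unpacked; it illustrates only the formatting, not kubelet's or containerd's actual plumbing.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func main() {
        // Server side: a CNI failure returned to the CRI client as a gRPC status.
        err := status.Error(codes.Unknown,
            `failed to setup network for sandbox "fcf9d27e...": plugin type="calico" failed (add): `+
                `stat /var/lib/calico/nodename: no such file or directory`)

        // Client side: err.Error() is the "rpc error: code = Unknown desc = ..." text seen in the log.
        fmt.Println(err)

        // The code and description can also be unpacked instead of string-matched.
        if s, ok := status.FromError(err); ok {
            fmt.Println("code:", s.Code(), "desc:", s.Message())
        }
    }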
Aug 13 01:15:15.392590 containerd[1549]: time="2025-08-13T01:15:15.392547735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdzq,Uid:706cc235-88f9-4461-aacc-ad5d00a0de1c,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:15.431651 containerd[1549]: time="2025-08-13T01:15:15.430705581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:15:15.433856 containerd[1549]: time="2025-08-13T01:15:15.433806541Z" level=error msg="Failed to destroy network for sandbox \"a15dcfb3c1bf8c6bb5a77900957e6c8943d4ae74b149bfd1d398b3d1f09285b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.434742 containerd[1549]: time="2025-08-13T01:15:15.434706789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdzq,Uid:706cc235-88f9-4461-aacc-ad5d00a0de1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15dcfb3c1bf8c6bb5a77900957e6c8943d4ae74b149bfd1d398b3d1f09285b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.434995 kubelet[1922]: E0813 01:15:15.434969 1922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15dcfb3c1bf8c6bb5a77900957e6c8943d4ae74b149bfd1d398b3d1f09285b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:15.435245 kubelet[1922]: E0813 01:15:15.435008 1922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15dcfb3c1bf8c6bb5a77900957e6c8943d4ae74b149bfd1d398b3d1f09285b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:15.435245 kubelet[1922]: E0813 01:15:15.435026 1922 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15dcfb3c1bf8c6bb5a77900957e6c8943d4ae74b149bfd1d398b3d1f09285b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:15.435302 kubelet[1922]: E0813 01:15:15.435237 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qpdzq_calico-system(706cc235-88f9-4461-aacc-ad5d00a0de1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qpdzq_calico-system(706cc235-88f9-4461-aacc-ad5d00a0de1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a15dcfb3c1bf8c6bb5a77900957e6c8943d4ae74b149bfd1d398b3d1f09285b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qpdzq" 
podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:16.301867 kubelet[1922]: E0813 01:15:16.301819 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:17.303203 kubelet[1922]: E0813 01:15:17.303153 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:18.303717 kubelet[1922]: E0813 01:15:18.303655 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:18.650122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238973008.mount: Deactivated successfully. Aug 13 01:15:18.686607 containerd[1549]: time="2025-08-13T01:15:18.686555548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:18.689625 containerd[1549]: time="2025-08-13T01:15:18.689410747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:15:18.691593 containerd[1549]: time="2025-08-13T01:15:18.691227611Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:18.693904 containerd[1549]: time="2025-08-13T01:15:18.693868697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.263129945s" Aug 13 01:15:18.693904 containerd[1549]: time="2025-08-13T01:15:18.693902077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:15:18.694324 containerd[1549]: time="2025-08-13T01:15:18.694300979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:18.708584 containerd[1549]: time="2025-08-13T01:15:18.708457798Z" level=info msg="CreateContainer within sandbox \"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:15:18.718721 containerd[1549]: time="2025-08-13T01:15:18.717755380Z" level=info msg="Container d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:18.725636 containerd[1549]: time="2025-08-13T01:15:18.725597693Z" level=info msg="CreateContainer within sandbox \"f3b26b11ad2870d36b5d859b3aa16d674e206dcdfd1e6e2d23d69472b6cce7dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454\"" Aug 13 01:15:18.726163 containerd[1549]: time="2025-08-13T01:15:18.726107745Z" level=info msg="StartContainer for \"d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454\"" Aug 13 01:15:18.727255 containerd[1549]: time="2025-08-13T01:15:18.727224578Z" level=info msg="connecting to shim d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454" 
address="unix:///run/containerd/s/87312d5be0c14fae4266b52c10280c806a796d54a9ead695f1554cdec88c4115" protocol=ttrpc version=3 Aug 13 01:15:18.755699 systemd[1]: Started cri-containerd-d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454.scope - libcontainer container d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454. Aug 13 01:15:18.800475 containerd[1549]: time="2025-08-13T01:15:18.800447878Z" level=info msg="StartContainer for \"d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454\" returns successfully" Aug 13 01:15:18.879637 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:15:18.879708 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:15:19.304033 kubelet[1922]: E0813 01:15:19.303984 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:19.454625 systemd[1]: Created slice kubepods-besteffort-podc70154b0_6d90_4dae_8732_d244ece81fb7.slice - libcontainer container kubepods-besteffort-podc70154b0_6d90_4dae_8732_d244ece81fb7.slice. Aug 13 01:15:19.471801 kubelet[1922]: I0813 01:15:19.471757 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rfw6k" podStartSLOduration=3.329709332 podStartE2EDuration="14.471745355s" podCreationTimestamp="2025-08-13 01:15:05 +0000 UTC" firstStartedPulling="2025-08-13 01:15:07.552877138 +0000 UTC m=+2.738071123" lastFinishedPulling="2025-08-13 01:15:18.694913162 +0000 UTC m=+13.880107146" observedRunningTime="2025-08-13 01:15:19.470860642 +0000 UTC m=+14.656054636" watchObservedRunningTime="2025-08-13 01:15:19.471745355 +0000 UTC m=+14.656939339" Aug 13 01:15:19.512896 kubelet[1922]: I0813 01:15:19.512874 1922 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9df\" (UniqueName: \"kubernetes.io/projected/c70154b0-6d90-4dae-8732-d244ece81fb7-kube-api-access-hr9df\") pod \"nginx-deployment-7fcdb87857-5272h\" (UID: \"c70154b0-6d90-4dae-8732-d244ece81fb7\") " pod="default/nginx-deployment-7fcdb87857-5272h" Aug 13 01:15:19.758681 containerd[1549]: time="2025-08-13T01:15:19.758602324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5272h,Uid:c70154b0-6d90-4dae-8732-d244ece81fb7,Namespace:default,Attempt:0,}" Aug 13 01:15:19.865343 systemd-networkd[1460]: calif2dab148ba3: Link UP Aug 13 01:15:19.866112 systemd-networkd[1460]: calif2dab148ba3: Gained carrier Aug 13 01:15:19.878664 containerd[1549]: 2025-08-13 01:15:19.785 [INFO][2680] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:15:19.878664 containerd[1549]: 2025-08-13 01:15:19.795 [INFO][2680] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0 nginx-deployment-7fcdb87857- default c70154b0-6d90-4dae-8732-d244ece81fb7 4217 0 2025-08-13 01:15:19 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 192.168.178.99 nginx-deployment-7fcdb87857-5272h eth0 default [] [] [kns.default ksa.default.default] calif2dab148ba3 [] [] }} ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" 
WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-" Aug 13 01:15:19.878664 containerd[1549]: 2025-08-13 01:15:19.795 [INFO][2680] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.878664 containerd[1549]: 2025-08-13 01:15:19.818 [INFO][2694] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.818 [INFO][2694] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2e0), Attrs:map[string]string{"namespace":"default", "node":"192.168.178.99", "pod":"nginx-deployment-7fcdb87857-5272h", "timestamp":"2025-08-13 01:15:19.81826849 +0000 UTC"}, Hostname:"192.168.178.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.818 [INFO][2694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.818 [INFO][2694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.818 [INFO][2694] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.178.99' Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.825 [INFO][2694] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" host="192.168.178.99" Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.831 [INFO][2694] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.178.99" Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.836 [INFO][2694] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.839 [INFO][2694] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.841 [INFO][2694] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:19.878898 containerd[1549]: 2025-08-13 01:15:19.841 [INFO][2694] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" host="192.168.178.99" Aug 13 01:15:19.879387 containerd[1549]: 2025-08-13 01:15:19.843 [INFO][2694] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf Aug 13 01:15:19.879387 containerd[1549]: 2025-08-13 01:15:19.847 [INFO][2694] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" host="192.168.178.99" Aug 13 01:15:19.879387 containerd[1549]: 2025-08-13 01:15:19.853 [INFO][2694] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.65/26] block=192.168.81.64/26 handle="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" host="192.168.178.99" Aug 13 01:15:19.879387 containerd[1549]: 2025-08-13 01:15:19.853 [INFO][2694] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.65/26] handle="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" host="192.168.178.99" Aug 13 01:15:19.879387 containerd[1549]: 2025-08-13 01:15:19.853 [INFO][2694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
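[editor's note] The IPAM entries above claim 192.168.81.65/26 from the block 192.168.81.64/26 for which this node holds an affinity. A short Go check with net/netip confirming that the assigned address sits inside that /26 and how many addresses such a block spans; this is only the address arithmetic, not Calico's allocator.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.81.64/26") // block with node affinity, per the log
        addr := netip.MustParseAddr("192.168.81.65")       // address IPAM assigned to the nginx pod

        fmt.Println("address in block:", block.Contains(addr)) // true

        // A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
        size := 1 << (32 - block.Bits())
        fmt.Println("addresses per /26 block:", size) // 64
    }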
Aug 13 01:15:19.879387 containerd[1549]: 2025-08-13 01:15:19.853 [INFO][2694] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.65/26] IPv6=[] ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.879533 containerd[1549]: 2025-08-13 01:15:19.857 [INFO][2680] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"c70154b0-6d90-4dae-8732-d244ece81fb7", ResourceVersion:"4217", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-5272h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2dab148ba3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:19.879533 containerd[1549]: 2025-08-13 01:15:19.857 [INFO][2680] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.65/32] ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.879667 containerd[1549]: 2025-08-13 01:15:19.857 [INFO][2680] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2dab148ba3 ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.879667 containerd[1549]: 2025-08-13 01:15:19.866 [INFO][2680] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.879711 containerd[1549]: 2025-08-13 01:15:19.866 [INFO][2680] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" 
WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"c70154b0-6d90-4dae-8732-d244ece81fb7", ResourceVersion:"4217", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 15, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf", Pod:"nginx-deployment-7fcdb87857-5272h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2dab148ba3", MAC:"6a:93:cc:5c:bf:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:19.879784 containerd[1549]: 2025-08-13 01:15:19.875 [INFO][2680] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Namespace="default" Pod="nginx-deployment-7fcdb87857-5272h" WorkloadEndpoint="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:19.900811 containerd[1549]: time="2025-08-13T01:15:19.900747366Z" level=info msg="connecting to shim e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" address="unix:///run/containerd/s/5e85ff187fe2b0f0e7014da5f0b06c905dc94471239f8175cd5756357a78a90e" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:19.923701 systemd[1]: Started cri-containerd-e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf.scope - libcontainer container e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf. 
Aug 13 01:15:19.964033 containerd[1549]: time="2025-08-13T01:15:19.963985906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5272h,Uid:c70154b0-6d90-4dae-8732-d244ece81fb7,Namespace:default,Attempt:0,} returns sandbox id \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\"" Aug 13 01:15:19.965551 containerd[1549]: time="2025-08-13T01:15:19.965527023Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 01:15:20.305094 kubelet[1922]: E0813 01:15:20.305011 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:20.454037 kubelet[1922]: I0813 01:15:20.453975 1922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:15:20.638823 systemd-networkd[1460]: vxlan.calico: Link UP Aug 13 01:15:20.638935 systemd-networkd[1460]: vxlan.calico: Gained carrier Aug 13 01:15:21.306040 kubelet[1922]: E0813 01:15:21.305987 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:21.862161 systemd-networkd[1460]: calif2dab148ba3: Gained IPv6LL Aug 13 01:15:21.903266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250834698.mount: Deactivated successfully. Aug 13 01:15:22.053379 systemd-networkd[1460]: vxlan.calico: Gained IPv6LL Aug 13 01:15:22.306760 kubelet[1922]: E0813 01:15:22.306707 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:22.845632 containerd[1549]: time="2025-08-13T01:15:22.845595750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:22.846350 containerd[1549]: time="2025-08-13T01:15:22.846309207Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73303204" Aug 13 01:15:22.846901 containerd[1549]: time="2025-08-13T01:15:22.846871205Z" level=info msg="ImageCreate event name:\"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:22.849382 containerd[1549]: time="2025-08-13T01:15:22.848772486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:22.849941 containerd[1549]: time="2025-08-13T01:15:22.849910017Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 2.884357187s" Aug 13 01:15:22.850015 containerd[1549]: time="2025-08-13T01:15:22.849999891Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 13 01:15:22.856140 containerd[1549]: time="2025-08-13T01:15:22.856114584Z" level=info msg="CreateContainer within sandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Aug 13 01:15:22.863528 containerd[1549]: time="2025-08-13T01:15:22.861696359Z" level=info msg="Container 
7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:22.874509 containerd[1549]: time="2025-08-13T01:15:22.874477811Z" level=info msg="CreateContainer within sandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\"" Aug 13 01:15:22.875174 containerd[1549]: time="2025-08-13T01:15:22.875077509Z" level=info msg="StartContainer for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\"" Aug 13 01:15:22.875873 containerd[1549]: time="2025-08-13T01:15:22.875835813Z" level=info msg="connecting to shim 7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd" address="unix:///run/containerd/s/5e85ff187fe2b0f0e7014da5f0b06c905dc94471239f8175cd5756357a78a90e" protocol=ttrpc version=3 Aug 13 01:15:22.903703 systemd[1]: Started cri-containerd-7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd.scope - libcontainer container 7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd. Aug 13 01:15:22.935890 containerd[1549]: time="2025-08-13T01:15:22.935815716Z" level=info msg="StartContainer for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" returns successfully" Aug 13 01:15:23.306910 kubelet[1922]: E0813 01:15:23.306855 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:23.973127 kubelet[1922]: I0813 01:15:23.972938 1922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:15:24.040691 containerd[1549]: time="2025-08-13T01:15:24.040532083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454\" id:\"5774118ce6f22beefcac63150f554d5079089670f810ab43bf1b0eb70aea6061\" pid:3033 exited_at:{seconds:1755047724 nanos:40220143}" Aug 13 01:15:24.058504 kubelet[1922]: I0813 01:15:24.058311 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-5272h" podStartSLOduration=2.170751572 podStartE2EDuration="5.058294472s" podCreationTimestamp="2025-08-13 01:15:19 +0000 UTC" firstStartedPulling="2025-08-13 01:15:19.965009044 +0000 UTC m=+15.150203028" lastFinishedPulling="2025-08-13 01:15:22.852551944 +0000 UTC m=+18.037745928" observedRunningTime="2025-08-13 01:15:23.477073138 +0000 UTC m=+18.662267122" watchObservedRunningTime="2025-08-13 01:15:24.058294472 +0000 UTC m=+19.243488456" Aug 13 01:15:24.104994 containerd[1549]: time="2025-08-13T01:15:24.104935079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3f5c4196feece6efd5c083ce7ed10c988a1ab37d507b1b5aac0fa632261c454\" id:\"c45103ffb62512b2d54b7c35b46248def16c2a1713e3559edf9c6e337bc597a5\" pid:3058 exited_at:{seconds:1755047724 nanos:104629365}" Aug 13 01:15:24.307282 kubelet[1922]: E0813 01:15:24.307196 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:24.597331 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
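[editor's note] The pod_startup_latency_tracker entry above reports podStartE2EDuration=5.058294472s and podStartSLOduration=2.170751572s for the nginx pod. Those values are consistent with E2E = watchObservedRunningTime - podCreationTimestamp and SLO = E2E minus the image-pull window (lastFinishedPulling - firstStartedPulling). The worked Go arithmetic below checks that against the timestamps in the log; it is a consistency check on the logged values, not a claim about kubelet internals.

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry above.
        created := mustParse("2025-08-13 01:15:19 +0000 UTC")
        firstPull := mustParse("2025-08-13 01:15:19.965009044 +0000 UTC")
        lastPull := mustParse("2025-08-13 01:15:22.852551944 +0000 UTC")
        observed := mustParse("2025-08-13 01:15:24.058294472 +0000 UTC")

        e2e := observed.Sub(created)
        pull := lastPull.Sub(firstPull)

        fmt.Println("podStartE2EDuration:", e2e)     // 5.058294472s
        fmt.Println("image pull window:", pull)      // 2.8875429s
        fmt.Println("podStartSLOduration:", e2e-pull) // 2.170751572s
    }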
Aug 13 01:15:25.293974 kubelet[1922]: E0813 01:15:25.293944 1922 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:25.307310 kubelet[1922]: E0813 01:15:25.307288 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:25.405018 kubelet[1922]: I0813 01:15:25.404984 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:25.405018 kubelet[1922]: I0813 01:15:25.405016 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:25.406245 kubelet[1922]: I0813 01:15:25.406232 1922 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:25.413298 kubelet[1922]: I0813 01:15:25.412870 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:25.413298 kubelet[1922]: I0813 01:15:25.412920 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5","calico-system/goldmane-768f4c5c69-mc8t4","calico-system/whisker-6c85ff7dcf-4r5qm","calico-apiserver/calico-apiserver-bc89dd6cc-glrdh","calico-system/csi-node-driver-qpdzq","default/nginx-deployment-7fcdb87857-5272h","tigera-operator/tigera-operator-747864d56d-v7fpp","calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8"] Aug 13 01:15:25.418712 kubelet[1922]: I0813 01:15:25.418684 1922 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5" Aug 13 01:15:25.418712 kubelet[1922]: I0813 01:15:25.418698 1922 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5"] Aug 13 01:15:25.444361 kubelet[1922]: I0813 01:15:25.444326 1922 kubelet.go:2351] "Pod admission denied" podUID="e9c79a8d-860c-42ff-8f60-e94625eb2424" pod="calico-apiserver/calico-apiserver-bc89dd6cc-829q2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:25.446181 kubelet[1922]: I0813 01:15:25.446158 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cwbd\" (UniqueName: \"kubernetes.io/projected/7cf8f307-db15-4438-a8ed-91e671f6b9c3-kube-api-access-7cwbd\") pod \"7cf8f307-db15-4438-a8ed-91e671f6b9c3\" (UID: \"7cf8f307-db15-4438-a8ed-91e671f6b9c3\") " Aug 13 01:15:25.446232 kubelet[1922]: I0813 01:15:25.446188 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cf8f307-db15-4438-a8ed-91e671f6b9c3-calico-apiserver-certs\") pod \"7cf8f307-db15-4438-a8ed-91e671f6b9c3\" (UID: \"7cf8f307-db15-4438-a8ed-91e671f6b9c3\") " Aug 13 01:15:25.450162 systemd[1]: var-lib-kubelet-pods-7cf8f307\x2ddb15\x2d4438\x2da8ed\x2d91e671f6b9c3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:15:25.451956 kubelet[1922]: I0813 01:15:25.451928 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf8f307-db15-4438-a8ed-91e671f6b9c3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7cf8f307-db15-4438-a8ed-91e671f6b9c3" (UID: "7cf8f307-db15-4438-a8ed-91e671f6b9c3"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:15:25.452992 kubelet[1922]: I0813 01:15:25.452957 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf8f307-db15-4438-a8ed-91e671f6b9c3-kube-api-access-7cwbd" (OuterVolumeSpecName: "kube-api-access-7cwbd") pod "7cf8f307-db15-4438-a8ed-91e671f6b9c3" (UID: "7cf8f307-db15-4438-a8ed-91e671f6b9c3"). InnerVolumeSpecName "kube-api-access-7cwbd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:15:25.453964 systemd[1]: var-lib-kubelet-pods-7cf8f307\x2ddb15\x2d4438\x2da8ed\x2d91e671f6b9c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7cwbd.mount: Deactivated successfully. Aug 13 01:15:25.468429 systemd[1]: Removed slice kubepods-besteffort-pod7cf8f307_db15_4438_a8ed_91e671f6b9c3.slice - libcontainer container kubepods-besteffort-pod7cf8f307_db15_4438_a8ed_91e671f6b9c3.slice. Aug 13 01:15:25.479839 kubelet[1922]: I0813 01:15:25.479807 1922 kubelet.go:2351] "Pod admission denied" podUID="f97111b9-9556-46b3-ab8f-fc7775338a26" pod="calico-apiserver/calico-apiserver-bc89dd6cc-rwc2q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:25.509586 kubelet[1922]: I0813 01:15:25.509545 1922 kubelet.go:2351] "Pod admission denied" podUID="8bae6ace-ce85-4447-a626-b18bcc088e36" pod="calico-apiserver/calico-apiserver-bc89dd6cc-fp4xd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:25.512295 kubelet[1922]: I0813 01:15:25.512259 1922 status_manager.go:890] "Failed to get status for pod" podUID="8bae6ace-ce85-4447-a626-b18bcc088e36" pod="calico-apiserver/calico-apiserver-bc89dd6cc-fp4xd" err="pods \"calico-apiserver-bc89dd6cc-fp4xd\" is forbidden: User \"system:node:192.168.178.99\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '192.168.178.99' and this object" Aug 13 01:15:25.546893 kubelet[1922]: I0813 01:15:25.546485 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7cwbd\" (UniqueName: \"kubernetes.io/projected/7cf8f307-db15-4438-a8ed-91e671f6b9c3-kube-api-access-7cwbd\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:25.546893 kubelet[1922]: I0813 01:15:25.546507 1922 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cf8f307-db15-4438-a8ed-91e671f6b9c3-calico-apiserver-certs\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:26.307419 kubelet[1922]: E0813 01:15:26.307374 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:26.383185 containerd[1549]: time="2025-08-13T01:15:26.383147555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mc8t4,Uid:40a60688-4101-413f-9647-7f5f3b8b0a05,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:26.419301 kubelet[1922]: I0813 01:15:26.419255 1922 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-wzmp5"] Aug 13 01:15:26.428661 kubelet[1922]: I0813 01:15:26.428477 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:26.428661 kubelet[1922]: I0813 01:15:26.428515 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:26.429858 kubelet[1922]: I0813 01:15:26.429839 1922 image_gc_manager.go:431] "Attempting to delete 
unused images" Aug 13 01:15:26.439642 kubelet[1922]: I0813 01:15:26.438583 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:26.439642 kubelet[1922]: I0813 01:15:26.438715 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-glrdh","calico-system/goldmane-768f4c5c69-mc8t4","calico-system/whisker-6c85ff7dcf-4r5qm","calico-system/csi-node-driver-qpdzq","default/nginx-deployment-7fcdb87857-5272h","tigera-operator/tigera-operator-747864d56d-v7fpp","calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8"] Aug 13 01:15:26.444612 kubelet[1922]: I0813 01:15:26.444591 1922 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-bc89dd6cc-glrdh" Aug 13 01:15:26.444612 kubelet[1922]: I0813 01:15:26.444610 1922 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-glrdh"] Aug 13 01:15:26.532713 systemd-networkd[1460]: cali46bb075125b: Link UP Aug 13 01:15:26.533837 systemd-networkd[1460]: cali46bb075125b: Gained carrier Aug 13 01:15:26.550281 containerd[1549]: 2025-08-13 01:15:26.426 [INFO][3084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0 goldmane-768f4c5c69- calico-system 40a60688-4101-413f-9647-7f5f3b8b0a05 4157 0 2025-08-13 01:14:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 192.168.178.99 goldmane-768f4c5c69-mc8t4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali46bb075125b [] [] }} ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-" Aug 13 01:15:26.550281 containerd[1549]: 2025-08-13 01:15:26.427 [INFO][3084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.550281 containerd[1549]: 2025-08-13 01:15:26.462 [INFO][3094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.462 [INFO][3094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.178.99", "pod":"goldmane-768f4c5c69-mc8t4", "timestamp":"2025-08-13 01:15:26.462539451 +0000 UTC"}, Hostname:"192.168.178.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.462 [INFO][3094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.462 [INFO][3094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.462 [INFO][3094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.178.99' Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.475 [INFO][3094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" host="192.168.178.99" Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.483 [INFO][3094] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.178.99" Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.492 [INFO][3094] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.496 [INFO][3094] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.501 [INFO][3094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:26.550509 containerd[1549]: 2025-08-13 01:15:26.502 [INFO][3094] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" host="192.168.178.99" Aug 13 01:15:26.550770 containerd[1549]: 2025-08-13 01:15:26.504 [INFO][3094] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7 Aug 13 01:15:26.550770 containerd[1549]: 2025-08-13 01:15:26.514 [INFO][3094] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" host="192.168.178.99" Aug 13 01:15:26.550770 containerd[1549]: 2025-08-13 01:15:26.527 [INFO][3094] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.66/26] block=192.168.81.64/26 handle="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" host="192.168.178.99" Aug 13 01:15:26.550770 containerd[1549]: 2025-08-13 01:15:26.527 [INFO][3094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.66/26] handle="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" host="192.168.178.99" Aug 13 01:15:26.550770 containerd[1549]: 2025-08-13 01:15:26.527 [INFO][3094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
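The ipam trace above shows the CNI plugin confirming this node's affinity for the 192.168.81.64/26 block and then claiming 192.168.81.66 from it for goldmane-768f4c5c69-mc8t4. A minimal sketch in Python (standard library only; the addresses are copied from the log lines above) that re-checks the claimed address against the affine block:

    import ipaddress

    # Values taken from the ipam/ipam.go entries above.
    affine_block = ipaddress.ip_network("192.168.81.64/26")
    claimed = ipaddress.ip_address("192.168.81.66")

    # A claim is only consistent if it falls inside the block the host holds affinity for.
    assert claimed in affine_block
    print(f"{claimed} is one of {affine_block.num_addresses} addresses in {affine_block}")

The same containment holds for the later claims of .67 and .68 from the same block further down in this section.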
Aug 13 01:15:26.550770 containerd[1549]: 2025-08-13 01:15:26.527 [INFO][3094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.66/26] IPv6=[] ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.550884 containerd[1549]: 2025-08-13 01:15:26.529 [INFO][3084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"40a60688-4101-413f-9647-7f5f3b8b0a05", ResourceVersion:"4157", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"", Pod:"goldmane-768f4c5c69-mc8t4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46bb075125b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:26.550884 containerd[1549]: 2025-08-13 01:15:26.529 [INFO][3084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.66/32] ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.550950 containerd[1549]: 2025-08-13 01:15:26.529 [INFO][3084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46bb075125b ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.550950 containerd[1549]: 2025-08-13 01:15:26.532 [INFO][3084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.550983 containerd[1549]: 2025-08-13 01:15:26.533 [INFO][3084] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" 
WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"40a60688-4101-413f-9647-7f5f3b8b0a05", ResourceVersion:"4157", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 14, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7", Pod:"goldmane-768f4c5c69-mc8t4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46bb075125b", MAC:"0e:b1:f6:6c:00:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:26.551032 containerd[1549]: 2025-08-13 01:15:26.545 [INFO][3084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Namespace="calico-system" Pod="goldmane-768f4c5c69-mc8t4" WorkloadEndpoint="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:26.551623 kubelet[1922]: I0813 01:15:26.551593 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/014bd6a3-8dab-4cb8-9701-577f24c81f0a-calico-apiserver-certs\") pod \"014bd6a3-8dab-4cb8-9701-577f24c81f0a\" (UID: \"014bd6a3-8dab-4cb8-9701-577f24c81f0a\") " Aug 13 01:15:26.552254 kubelet[1922]: I0813 01:15:26.551632 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2qv7\" (UniqueName: \"kubernetes.io/projected/014bd6a3-8dab-4cb8-9701-577f24c81f0a-kube-api-access-h2qv7\") pod \"014bd6a3-8dab-4cb8-9701-577f24c81f0a\" (UID: \"014bd6a3-8dab-4cb8-9701-577f24c81f0a\") " Aug 13 01:15:26.558860 systemd[1]: var-lib-kubelet-pods-014bd6a3\x2d8dab\x2d4cb8\x2d9701\x2d577f24c81f0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2qv7.mount: Deactivated successfully. Aug 13 01:15:26.564362 systemd[1]: var-lib-kubelet-pods-014bd6a3\x2d8dab\x2d4cb8\x2d9701\x2d577f24c81f0a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:15:26.565014 kubelet[1922]: I0813 01:15:26.564985 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014bd6a3-8dab-4cb8-9701-577f24c81f0a-kube-api-access-h2qv7" (OuterVolumeSpecName: "kube-api-access-h2qv7") pod "014bd6a3-8dab-4cb8-9701-577f24c81f0a" (UID: "014bd6a3-8dab-4cb8-9701-577f24c81f0a"). InnerVolumeSpecName "kube-api-access-h2qv7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:15:26.567556 kubelet[1922]: I0813 01:15:26.567533 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014bd6a3-8dab-4cb8-9701-577f24c81f0a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "014bd6a3-8dab-4cb8-9701-577f24c81f0a" (UID: "014bd6a3-8dab-4cb8-9701-577f24c81f0a"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:15:26.580736 containerd[1549]: time="2025-08-13T01:15:26.580675595Z" level=info msg="connecting to shim dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" address="unix:///run/containerd/s/1ec1be34fd44e9db739fea6c6fed069de9b42ea2f1fa8910fc63eeb011ba9820" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:26.610708 systemd[1]: Started cri-containerd-dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7.scope - libcontainer container dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7. Aug 13 01:15:26.651886 kubelet[1922]: I0813 01:15:26.651864 1922 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/014bd6a3-8dab-4cb8-9701-577f24c81f0a-calico-apiserver-certs\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:26.651973 kubelet[1922]: I0813 01:15:26.651959 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2qv7\" (UniqueName: \"kubernetes.io/projected/014bd6a3-8dab-4cb8-9701-577f24c81f0a-kube-api-access-h2qv7\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:26.658135 containerd[1549]: time="2025-08-13T01:15:26.658102985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-mc8t4,Uid:40a60688-4101-413f-9647-7f5f3b8b0a05,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\"" Aug 13 01:15:26.659723 containerd[1549]: time="2025-08-13T01:15:26.659694753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 01:15:26.771438 systemd[1]: Removed slice kubepods-besteffort-pod014bd6a3_8dab_4cb8_9701_577f24c81f0a.slice - libcontainer container kubepods-besteffort-pod014bd6a3_8dab_4cb8_9701_577f24c81f0a.slice. 
Aug 13 01:15:27.308351 kubelet[1922]: E0813 01:15:27.308293 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:27.384908 containerd[1549]: time="2025-08-13T01:15:27.384785403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-4r5qm,Uid:d6a1a47d-2556-4325-b483-facae1719336,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:27.445246 kubelet[1922]: I0813 01:15:27.445202 1922 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-glrdh"] Aug 13 01:15:27.455309 kubelet[1922]: I0813 01:15:27.454854 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:27.455309 kubelet[1922]: I0813 01:15:27.454880 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:27.458695 kubelet[1922]: I0813 01:15:27.458405 1922 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:27.472899 kubelet[1922]: I0813 01:15:27.472887 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:27.472899 kubelet[1922]: I0813 01:15:27.472970 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-6c85ff7dcf-4r5qm","calico-system/goldmane-768f4c5c69-mc8t4","calico-system/csi-node-driver-qpdzq","default/nginx-deployment-7fcdb87857-5272h","tigera-operator/tigera-operator-747864d56d-v7fpp","calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8"] Aug 13 01:15:27.746967 systemd-networkd[1460]: calibf104c57200: Link UP Aug 13 01:15:27.750717 systemd-networkd[1460]: calibf104c57200: Gained carrier Aug 13 01:15:27.789665 containerd[1549]: 2025-08-13 01:15:27.486 [INFO][3164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0 whisker-6c85ff7dcf- calico-system d6a1a47d-2556-4325-b483-facae1719336 4158 0 2025-08-13 01:13:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c85ff7dcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 192.168.178.99 whisker-6c85ff7dcf-4r5qm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibf104c57200 [] [] }} ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-" Aug 13 01:15:27.789665 containerd[1549]: 2025-08-13 01:15:27.487 [INFO][3164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.789665 containerd[1549]: 2025-08-13 01:15:27.600 [INFO][3179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.600 [INFO][3179] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.178.99", "pod":"whisker-6c85ff7dcf-4r5qm", "timestamp":"2025-08-13 01:15:27.600376528 +0000 UTC"}, Hostname:"192.168.178.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.600 [INFO][3179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.600 [INFO][3179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.601 [INFO][3179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.178.99' Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.636 [INFO][3179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" host="192.168.178.99" Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.643 [INFO][3179] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.178.99" Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.653 [INFO][3179] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.660 [INFO][3179] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.663 [INFO][3179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:27.789830 containerd[1549]: 2025-08-13 01:15:27.664 [INFO][3179] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" host="192.168.178.99" Aug 13 01:15:27.790222 containerd[1549]: 2025-08-13 01:15:27.677 [INFO][3179] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800 Aug 13 01:15:27.790222 containerd[1549]: 2025-08-13 01:15:27.712 [INFO][3179] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" host="192.168.178.99" Aug 13 01:15:27.790222 containerd[1549]: 2025-08-13 01:15:27.732 [INFO][3179] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.67/26] block=192.168.81.64/26 handle="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" host="192.168.178.99" Aug 13 01:15:27.790222 containerd[1549]: 2025-08-13 01:15:27.732 [INFO][3179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.67/26] handle="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" host="192.168.178.99" Aug 13 01:15:27.790222 containerd[1549]: 2025-08-13 01:15:27.732 [INFO][3179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:27.790222 containerd[1549]: 2025-08-13 01:15:27.732 [INFO][3179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.67/26] IPv6=[] ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.790327 containerd[1549]: 2025-08-13 01:15:27.737 [INFO][3164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0", GenerateName:"whisker-6c85ff7dcf-", Namespace:"calico-system", SelfLink:"", UID:"d6a1a47d-2556-4325-b483-facae1719336", ResourceVersion:"4158", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c85ff7dcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"", Pod:"whisker-6c85ff7dcf-4r5qm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf104c57200", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:27.790327 containerd[1549]: 2025-08-13 01:15:27.738 [INFO][3164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.67/32] ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.790393 containerd[1549]: 2025-08-13 01:15:27.738 [INFO][3164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf104c57200 ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.790393 containerd[1549]: 2025-08-13 01:15:27.752 [INFO][3164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.790434 containerd[1549]: 2025-08-13 01:15:27.756 [INFO][3164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" 
WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0", GenerateName:"whisker-6c85ff7dcf-", Namespace:"calico-system", SelfLink:"", UID:"d6a1a47d-2556-4325-b483-facae1719336", ResourceVersion:"4158", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c85ff7dcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800", Pod:"whisker-6c85ff7dcf-4r5qm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf104c57200", MAC:"d2:4b:75:ec:09:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:27.790479 containerd[1549]: 2025-08-13 01:15:27.785 [INFO][3164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Namespace="calico-system" Pod="whisker-6c85ff7dcf-4r5qm" WorkloadEndpoint="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:27.830011 containerd[1549]: time="2025-08-13T01:15:27.829979852Z" level=info msg="connecting to shim 87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" address="unix:///run/containerd/s/9d4e940a8c61f804014e730b314eb0e4f7f27c20e8b435f6b2745a559308b632" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:27.882849 systemd[1]: Started cri-containerd-87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800.scope - libcontainer container 87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800. Aug 13 01:15:27.915224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175188180.mount: Deactivated successfully. 
Aug 13 01:15:27.951874 containerd[1549]: time="2025-08-13T01:15:27.951847015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-4r5qm,Uid:d6a1a47d-2556-4325-b483-facae1719336,Namespace:calico-system,Attempt:0,} returns sandbox id \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\"" Aug 13 01:15:28.197809 systemd-networkd[1460]: cali46bb075125b: Gained IPv6LL Aug 13 01:15:28.309430 kubelet[1922]: E0813 01:15:28.309400 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:28.328725 containerd[1549]: time="2025-08-13T01:15:28.328670382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:28.329366 containerd[1549]: time="2025-08-13T01:15:28.329342546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 01:15:28.329940 containerd[1549]: time="2025-08-13T01:15:28.329887848Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:28.331604 containerd[1549]: time="2025-08-13T01:15:28.331297198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:28.332258 containerd[1549]: time="2025-08-13T01:15:28.331898155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 1.672170848s" Aug 13 01:15:28.332258 containerd[1549]: time="2025-08-13T01:15:28.331927289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 01:15:28.333880 containerd[1549]: time="2025-08-13T01:15:28.333860213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 01:15:28.334426 containerd[1549]: time="2025-08-13T01:15:28.334395506Z" level=info msg="CreateContainer within sandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 01:15:28.341285 containerd[1549]: time="2025-08-13T01:15:28.341262044Z" level=info msg="Container 339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:28.349270 containerd[1549]: time="2025-08-13T01:15:28.349233577Z" level=info msg="CreateContainer within sandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\"" Aug 13 01:15:28.349670 containerd[1549]: time="2025-08-13T01:15:28.349632240Z" level=info msg="StartContainer for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\"" Aug 13 01:15:28.350462 containerd[1549]: time="2025-08-13T01:15:28.350426373Z" level=info msg="connecting to shim 
339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11" address="unix:///run/containerd/s/1ec1be34fd44e9db739fea6c6fed069de9b42ea2f1fa8910fc63eeb011ba9820" protocol=ttrpc version=3 Aug 13 01:15:28.372689 systemd[1]: Started cri-containerd-339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11.scope - libcontainer container 339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11. Aug 13 01:15:28.424667 containerd[1549]: time="2025-08-13T01:15:28.424635183Z" level=info msg="StartContainer for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" returns successfully" Aug 13 01:15:28.559543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011608402.mount: Deactivated successfully. Aug 13 01:15:28.990804 containerd[1549]: time="2025-08-13T01:15:28.990732453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:28.991465 containerd[1549]: time="2025-08-13T01:15:28.991404688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 01:15:28.991950 containerd[1549]: time="2025-08-13T01:15:28.991924778Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:28.994317 containerd[1549]: time="2025-08-13T01:15:28.993416706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:28.994317 containerd[1549]: time="2025-08-13T01:15:28.994200711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 659.938192ms" Aug 13 01:15:28.994317 containerd[1549]: time="2025-08-13T01:15:28.994226412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 01:15:28.996466 containerd[1549]: time="2025-08-13T01:15:28.996442916Z" level=info msg="CreateContainer within sandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 01:15:29.004668 containerd[1549]: time="2025-08-13T01:15:29.004647369Z" level=info msg="Container d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:29.008521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369045017.mount: Deactivated successfully. 
Aug 13 01:15:29.013416 containerd[1549]: time="2025-08-13T01:15:29.013380739Z" level=info msg="CreateContainer within sandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\"" Aug 13 01:15:29.013853 containerd[1549]: time="2025-08-13T01:15:29.013820295Z" level=info msg="StartContainer for \"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\"" Aug 13 01:15:29.014728 containerd[1549]: time="2025-08-13T01:15:29.014695694Z" level=info msg="connecting to shim d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc" address="unix:///run/containerd/s/9d4e940a8c61f804014e730b314eb0e4f7f27c20e8b435f6b2745a559308b632" protocol=ttrpc version=3 Aug 13 01:15:29.038687 systemd[1]: Started cri-containerd-d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc.scope - libcontainer container d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc. Aug 13 01:15:29.085309 containerd[1549]: time="2025-08-13T01:15:29.085262365Z" level=info msg="StartContainer for \"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\" returns successfully" Aug 13 01:15:29.087593 containerd[1549]: time="2025-08-13T01:15:29.087497455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 01:15:29.098428 systemd[1]: cri-containerd-d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc.scope: Deactivated successfully. Aug 13 01:15:29.100904 containerd[1549]: time="2025-08-13T01:15:29.100832463Z" level=info msg="received exit event container_id:\"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\" id:\"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\" pid:3301 exit_status:1 exited_at:{seconds:1755047729 nanos:100650123}" Aug 13 01:15:29.100904 containerd[1549]: time="2025-08-13T01:15:29.100881981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\" id:\"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\" pid:3301 exit_status:1 exited_at:{seconds:1755047729 nanos:100650123}" Aug 13 01:15:29.120011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc-rootfs.mount: Deactivated successfully. Aug 13 01:15:29.310684 kubelet[1922]: E0813 01:15:29.310551 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:29.413185 systemd-networkd[1460]: calibf104c57200: Gained IPv6LL Aug 13 01:15:29.475138 kubelet[1922]: I0813 01:15:29.475111 1922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:15:29.475427 kubelet[1922]: I0813 01:15:29.475150 1922 scope.go:117] "RemoveContainer" containerID="d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc" Aug 13 01:15:29.477247 containerd[1549]: time="2025-08-13T01:15:29.477151436Z" level=info msg="RemoveContainer for \"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\"" Aug 13 01:15:29.481833 containerd[1549]: time="2025-08-13T01:15:29.481749294Z" level=info msg="RemoveContainer for \"d778c6e79ded7b0836ba51b75870c57d468562ecf503f7101140a3cccdd708cc\" returns successfully" Aug 13 01:15:29.896011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436042246.mount: Deactivated successfully. 
Aug 13 01:15:29.898351 containerd[1549]: time="2025-08-13T01:15:29.898311294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount436042246: write /var/lib/containerd/tmpmounts/containerd-mount436042246/whisker-backend: no space left on device" Aug 13 01:15:29.898510 containerd[1549]: time="2025-08-13T01:15:29.898407498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 01:15:29.898706 kubelet[1922]: E0813 01:15:29.898668 1922 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount436042246: write /var/lib/containerd/tmpmounts/containerd-mount436042246/whisker-backend: no space left on device" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.2" Aug 13 01:15:29.898816 kubelet[1922]: E0813 01:15:29.898733 1922 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount436042246: write /var/lib/containerd/tmpmounts/containerd-mount436042246/whisker-backend: no space left on device" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.2" Aug 13 01:15:29.899982 kubelet[1922]: E0813 01:15:29.899935 1922 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s2ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c85ff7dcf-4r5qm_calico-system(d6a1a47d-2556-4325-b483-facae1719336): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount436042246: write /var/lib/containerd/tmpmounts/containerd-mount436042246/whisker-backend: no space left on device" logger="UnhandledError" Aug 13 01:15:29.901224 kubelet[1922]: E0813 01:15:29.901192 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\\\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount436042246: write /var/lib/containerd/tmpmounts/containerd-mount436042246/whisker-backend: no space left on device\"" pod="calico-system/whisker-6c85ff7dcf-4r5qm" podUID="d6a1a47d-2556-4325-b483-facae1719336" Aug 13 01:15:30.311766 kubelet[1922]: E0813 01:15:30.311669 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:30.382496 containerd[1549]: time="2025-08-13T01:15:30.382462812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdzq,Uid:706cc235-88f9-4461-aacc-ad5d00a0de1c,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:30.479884 containerd[1549]: time="2025-08-13T01:15:30.479822106Z" level=info msg="StopPodSandbox for 
\"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\"" Aug 13 01:15:30.487419 systemd[1]: cri-containerd-87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800.scope: Deactivated successfully. Aug 13 01:15:30.490424 kubelet[1922]: I0813 01:15:30.490359 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-mc8t4" podStartSLOduration=78.816537744 podStartE2EDuration="1m20.490343282s" podCreationTimestamp="2025-08-13 01:14:10 +0000 UTC" firstStartedPulling="2025-08-13 01:15:26.659302866 +0000 UTC m=+21.844496850" lastFinishedPulling="2025-08-13 01:15:28.333108404 +0000 UTC m=+23.518302388" observedRunningTime="2025-08-13 01:15:28.571563447 +0000 UTC m=+23.756757441" watchObservedRunningTime="2025-08-13 01:15:30.490343282 +0000 UTC m=+25.675537266" Aug 13 01:15:30.493630 containerd[1549]: time="2025-08-13T01:15:30.493549751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" id:\"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" pid:3229 exit_status:137 exited_at:{seconds:1755047730 nanos:493283448}" Aug 13 01:15:30.498191 systemd-networkd[1460]: califbfa7322f0d: Link UP Aug 13 01:15:30.499790 systemd-networkd[1460]: califbfa7322f0d: Gained carrier Aug 13 01:15:30.528423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800-rootfs.mount: Deactivated successfully. Aug 13 01:15:30.532114 containerd[1549]: time="2025-08-13T01:15:30.531959218Z" level=info msg="shim disconnected" id=87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800 namespace=k8s.io Aug 13 01:15:30.532114 containerd[1549]: time="2025-08-13T01:15:30.531984536Z" level=warning msg="cleaning up after shim disconnected" id=87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800 namespace=k8s.io Aug 13 01:15:30.532114 containerd[1549]: time="2025-08-13T01:15:30.531992392Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:15:30.533661 containerd[1549]: time="2025-08-13T01:15:30.532097758Z" level=info msg="received exit event sandbox_id:\"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" exit_status:137 exited_at:{seconds:1755047730 nanos:493283448}" Aug 13 01:15:30.535755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800-shm.mount: Deactivated successfully. 
Aug 13 01:15:30.538246 containerd[1549]: 2025-08-13 01:15:30.424 [INFO][3338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.178.99-k8s-csi--node--driver--qpdzq-eth0 csi-node-driver- calico-system 706cc235-88f9-4461-aacc-ad5d00a0de1c 4087 0 2025-08-13 01:15:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 192.168.178.99 csi-node-driver-qpdzq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califbfa7322f0d [] [] }} ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-" Aug 13 01:15:30.538246 containerd[1549]: 2025-08-13 01:15:30.425 [INFO][3338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.538246 containerd[1549]: 2025-08-13 01:15:30.450 [INFO][3350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" HandleID="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Workload="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.451 [INFO][3350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" HandleID="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Workload="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.178.99", "pod":"csi-node-driver-qpdzq", "timestamp":"2025-08-13 01:15:30.450258013 +0000 UTC"}, Hostname:"192.168.178.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.451 [INFO][3350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.451 [INFO][3350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.451 [INFO][3350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.178.99' Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.459 [INFO][3350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" host="192.168.178.99" Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.465 [INFO][3350] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.178.99" Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.470 [INFO][3350] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.472 [INFO][3350] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.474 [INFO][3350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="192.168.178.99" Aug 13 01:15:30.538379 containerd[1549]: 2025-08-13 01:15:30.474 [INFO][3350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" host="192.168.178.99" Aug 13 01:15:30.538653 containerd[1549]: 2025-08-13 01:15:30.476 [INFO][3350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e Aug 13 01:15:30.538653 containerd[1549]: 2025-08-13 01:15:30.479 [INFO][3350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" host="192.168.178.99" Aug 13 01:15:30.538653 containerd[1549]: 2025-08-13 01:15:30.487 [INFO][3350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.68/26] block=192.168.81.64/26 handle="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" host="192.168.178.99" Aug 13 01:15:30.538653 containerd[1549]: 2025-08-13 01:15:30.487 [INFO][3350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.68/26] handle="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" host="192.168.178.99" Aug 13 01:15:30.538653 containerd[1549]: 2025-08-13 01:15:30.487 [INFO][3350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:30.538653 containerd[1549]: 2025-08-13 01:15:30.488 [INFO][3350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.68/26] IPv6=[] ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" HandleID="k8s-pod-network.26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Workload="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.538766 containerd[1549]: 2025-08-13 01:15:30.491 [INFO][3338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-csi--node--driver--qpdzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"706cc235-88f9-4461-aacc-ad5d00a0de1c", ResourceVersion:"4087", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 15, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"", Pod:"csi-node-driver-qpdzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbfa7322f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:30.538817 containerd[1549]: 2025-08-13 01:15:30.491 [INFO][3338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.68/32] ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.538817 containerd[1549]: 2025-08-13 01:15:30.492 [INFO][3338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbfa7322f0d ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.538817 containerd[1549]: 2025-08-13 01:15:30.501 [INFO][3338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.538911 containerd[1549]: 2025-08-13 01:15:30.501 [INFO][3338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" 
Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.178.99-k8s-csi--node--driver--qpdzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"706cc235-88f9-4461-aacc-ad5d00a0de1c", ResourceVersion:"4087", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 15, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.178.99", ContainerID:"26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e", Pod:"csi-node-driver-qpdzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbfa7322f0d", MAC:"46:b5:02:26:16:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:30.538961 containerd[1549]: 2025-08-13 01:15:30.534 [INFO][3338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" Namespace="calico-system" Pod="csi-node-driver-qpdzq" WorkloadEndpoint="192.168.178.99-k8s-csi--node--driver--qpdzq-eth0" Aug 13 01:15:30.564155 containerd[1549]: time="2025-08-13T01:15:30.563599983Z" level=info msg="connecting to shim 26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e" address="unix:///run/containerd/s/5445730f9680effa959fd9cd3b559788142d4051a13298f0a1b678f559490204" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:30.588703 systemd[1]: Started cri-containerd-26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e.scope - libcontainer container 26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e. Aug 13 01:15:30.620747 systemd-networkd[1460]: calibf104c57200: Link DOWN Aug 13 01:15:30.621200 systemd-networkd[1460]: calibf104c57200: Lost carrier Aug 13 01:15:30.626129 containerd[1549]: time="2025-08-13T01:15:30.625912590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdzq,Uid:706cc235-88f9-4461-aacc-ad5d00a0de1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e\"" Aug 13 01:15:30.631763 containerd[1549]: time="2025-08-13T01:15:30.631714365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.618 [INFO][3411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.618 [INFO][3411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" iface="eth0" netns="/var/run/netns/cni-e0adce49-1847-4461-f0ff-0ec53f9b7e04" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.619 [INFO][3411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" iface="eth0" netns="/var/run/netns/cni-e0adce49-1847-4461-f0ff-0ec53f9b7e04" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.629 [INFO][3411] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" after=10.617646ms iface="eth0" netns="/var/run/netns/cni-e0adce49-1847-4461-f0ff-0ec53f9b7e04" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.629 [INFO][3411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.629 [INFO][3411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.654 [INFO][3467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.654 [INFO][3467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:30.767535 containerd[1549]: 2025-08-13 01:15:30.655 [INFO][3467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:30.767944 containerd[1549]: 2025-08-13 01:15:30.759 [INFO][3467] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:30.767944 containerd[1549]: 2025-08-13 01:15:30.759 [INFO][3467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:30.767944 containerd[1549]: 2025-08-13 01:15:30.763 [INFO][3467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:30.767944 containerd[1549]: 2025-08-13 01:15:30.765 [INFO][3411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:30.768723 containerd[1549]: time="2025-08-13T01:15:30.768677801Z" level=info msg="TearDown network for sandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" successfully" Aug 13 01:15:30.768723 containerd[1549]: time="2025-08-13T01:15:30.768703780Z" level=info msg="StopPodSandbox for \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" returns successfully" Aug 13 01:15:30.770066 systemd[1]: run-netns-cni\x2de0adce49\x2d1847\x2d4461\x2df0ff\x2d0ec53f9b7e04.mount: Deactivated successfully. 
Aug 13 01:15:30.774391 kubelet[1922]: I0813 01:15:30.774335 1922 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-6c85ff7dcf-4r5qm" Aug 13 01:15:30.774391 kubelet[1922]: I0813 01:15:30.774355 1922 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-6c85ff7dcf-4r5qm"] Aug 13 01:15:30.878425 kubelet[1922]: I0813 01:15:30.878131 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s2ww\" (UniqueName: \"kubernetes.io/projected/d6a1a47d-2556-4325-b483-facae1719336-kube-api-access-5s2ww\") pod \"d6a1a47d-2556-4325-b483-facae1719336\" (UID: \"d6a1a47d-2556-4325-b483-facae1719336\") " Aug 13 01:15:30.878425 kubelet[1922]: I0813 01:15:30.878164 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6a1a47d-2556-4325-b483-facae1719336-whisker-ca-bundle\") pod \"d6a1a47d-2556-4325-b483-facae1719336\" (UID: \"d6a1a47d-2556-4325-b483-facae1719336\") " Aug 13 01:15:30.878425 kubelet[1922]: I0813 01:15:30.878182 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6a1a47d-2556-4325-b483-facae1719336-whisker-backend-key-pair\") pod \"d6a1a47d-2556-4325-b483-facae1719336\" (UID: \"d6a1a47d-2556-4325-b483-facae1719336\") " Aug 13 01:15:30.880066 kubelet[1922]: I0813 01:15:30.879862 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a1a47d-2556-4325-b483-facae1719336-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d6a1a47d-2556-4325-b483-facae1719336" (UID: "d6a1a47d-2556-4325-b483-facae1719336"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:15:30.882039 kubelet[1922]: I0813 01:15:30.882020 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a1a47d-2556-4325-b483-facae1719336-kube-api-access-5s2ww" (OuterVolumeSpecName: "kube-api-access-5s2ww") pod "d6a1a47d-2556-4325-b483-facae1719336" (UID: "d6a1a47d-2556-4325-b483-facae1719336"). InnerVolumeSpecName "kube-api-access-5s2ww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:15:30.883031 kubelet[1922]: I0813 01:15:30.883005 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a1a47d-2556-4325-b483-facae1719336-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d6a1a47d-2556-4325-b483-facae1719336" (UID: "d6a1a47d-2556-4325-b483-facae1719336"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:15:30.884680 systemd[1]: var-lib-kubelet-pods-d6a1a47d\x2d2556\x2d4325\x2db483\x2dfacae1719336-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5s2ww.mount: Deactivated successfully. Aug 13 01:15:30.884869 systemd[1]: var-lib-kubelet-pods-d6a1a47d\x2d2556\x2d4325\x2db483\x2dfacae1719336-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:15:30.980295 kubelet[1922]: I0813 01:15:30.980260 1922 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6a1a47d-2556-4325-b483-facae1719336-whisker-ca-bundle\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:30.980295 kubelet[1922]: I0813 01:15:30.980282 1922 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6a1a47d-2556-4325-b483-facae1719336-whisker-backend-key-pair\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:30.980295 kubelet[1922]: I0813 01:15:30.980291 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5s2ww\" (UniqueName: \"kubernetes.io/projected/d6a1a47d-2556-4325-b483-facae1719336-kube-api-access-5s2ww\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:31.273620 containerd[1549]: time="2025-08-13T01:15:31.273421358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:31.274144 containerd[1549]: time="2025-08-13T01:15:31.274119606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:15:31.275207 containerd[1549]: time="2025-08-13T01:15:31.274600735Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:31.276287 containerd[1549]: time="2025-08-13T01:15:31.276253134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:31.276943 containerd[1549]: time="2025-08-13T01:15:31.276913906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 645.160734ms" Aug 13 01:15:31.277017 containerd[1549]: time="2025-08-13T01:15:31.277003257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:15:31.279108 containerd[1549]: time="2025-08-13T01:15:31.279084781Z" level=info msg="CreateContainer within sandbox \"26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:15:31.293656 containerd[1549]: time="2025-08-13T01:15:31.292812556Z" level=info msg="Container 225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:31.297186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185677835.mount: Deactivated successfully. 
Aug 13 01:15:31.298936 containerd[1549]: time="2025-08-13T01:15:31.298866395Z" level=info msg="CreateContainer within sandbox \"26993810d26950b29db5cfa0a993ceb22f0ab16c2bbc878815f322a8529cab7e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd\"" Aug 13 01:15:31.299607 containerd[1549]: time="2025-08-13T01:15:31.299272212Z" level=info msg="StartContainer for \"225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd\"" Aug 13 01:15:31.300808 containerd[1549]: time="2025-08-13T01:15:31.300761430Z" level=info msg="connecting to shim 225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd" address="unix:///run/containerd/s/5445730f9680effa959fd9cd3b559788142d4051a13298f0a1b678f559490204" protocol=ttrpc version=3 Aug 13 01:15:31.312908 kubelet[1922]: E0813 01:15:31.312823 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:31.321702 systemd[1]: Started cri-containerd-225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd.scope - libcontainer container 225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd. Aug 13 01:15:31.362453 containerd[1549]: time="2025-08-13T01:15:31.362288695Z" level=info msg="StartContainer for \"225114ae3f56978325eca5eb8ed24672ee0c3b997afb679c7d2f241c735c9ccd\" returns successfully" Aug 13 01:15:31.364165 containerd[1549]: time="2025-08-13T01:15:31.364143824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:15:31.391441 systemd[1]: Removed slice kubepods-besteffort-podd6a1a47d_2556_4325_b483_facae1719336.slice - libcontainer container kubepods-besteffort-podd6a1a47d_2556_4325_b483_facae1719336.slice. Aug 13 01:15:31.588897 systemd-networkd[1460]: califbfa7322f0d: Gained IPv6LL Aug 13 01:15:31.775247 kubelet[1922]: I0813 01:15:31.775183 1922 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-6c85ff7dcf-4r5qm"] Aug 13 01:15:31.790161 kubelet[1922]: I0813 01:15:31.790135 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:31.790211 kubelet[1922]: I0813 01:15:31.790171 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:31.791482 containerd[1549]: time="2025-08-13T01:15:31.791441849Z" level=info msg="StopPodSandbox for \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\"" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.833 [INFO][3538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.833 [INFO][3538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" iface="eth0" netns="" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.833 [INFO][3538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.833 [INFO][3538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.852 [INFO][3545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.852 [INFO][3545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.852 [INFO][3545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.860 [WARNING][3545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.861 [INFO][3545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:31.873517 containerd[1549]: 2025-08-13 01:15:31.864 [INFO][3545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:31.874464 containerd[1549]: 2025-08-13 01:15:31.869 [INFO][3538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:31.874464 containerd[1549]: time="2025-08-13T01:15:31.873641796Z" level=info msg="TearDown network for sandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" successfully" Aug 13 01:15:31.874464 containerd[1549]: time="2025-08-13T01:15:31.873666493Z" level=info msg="StopPodSandbox for \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" returns successfully" Aug 13 01:15:31.875271 containerd[1549]: time="2025-08-13T01:15:31.874975068Z" level=info msg="RemovePodSandbox for \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\"" Aug 13 01:15:31.875271 containerd[1549]: time="2025-08-13T01:15:31.875005009Z" level=info msg="Forcibly stopping sandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\"" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:31.965 [INFO][3560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:31.965 [INFO][3560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" iface="eth0" netns="" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:31.965 [INFO][3560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:31.965 [INFO][3560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:32.011 [INFO][3567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:32.012 [INFO][3567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:32.012 [INFO][3567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:32.020 [WARNING][3567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:32.020 [INFO][3567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" HandleID="k8s-pod-network.87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Workload="192.168.178.99-k8s-whisker--6c85ff7dcf--4r5qm-eth0" Aug 13 01:15:32.033893 containerd[1549]: 2025-08-13 01:15:32.022 [INFO][3567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:32.034391 containerd[1549]: 2025-08-13 01:15:32.028 [INFO][3560] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800" Aug 13 01:15:32.034760 containerd[1549]: time="2025-08-13T01:15:32.034516111Z" level=info msg="TearDown network for sandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" successfully" Aug 13 01:15:32.038587 containerd[1549]: time="2025-08-13T01:15:32.038431514Z" level=info msg="Ensure that sandbox 87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800 in task-service has been cleanup successfully" Aug 13 01:15:32.042808 containerd[1549]: time="2025-08-13T01:15:32.042763485Z" level=info msg="RemovePodSandbox \"87ee69efa5320cb2d1078ff2cdb6afae6edddab52b5c05b3d9ec46a23d6ad800\" returns successfully" Aug 13 01:15:32.043643 kubelet[1922]: I0813 01:15:32.043628 1922 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:32.065803 kubelet[1922]: I0813 01:15:32.065685 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:32.065803 kubelet[1922]: I0813 01:15:32.065759 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["default/nginx-deployment-7fcdb87857-5272h","tigera-operator/tigera-operator-747864d56d-v7fpp","calico-system/goldmane-768f4c5c69-mc8t4","calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8","calico-system/csi-node-driver-qpdzq"] Aug 13 01:15:32.067946 containerd[1549]: time="2025-08-13T01:15:32.067783660Z" level=info msg="StopContainer for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" with timeout 2 (s)" Aug 13 01:15:32.069662 containerd[1549]: time="2025-08-13T01:15:32.069593201Z" level=info msg="Stop container \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" with signal quit" Aug 13 01:15:32.096484 systemd[1]: cri-containerd-7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd.scope: Deactivated successfully. Aug 13 01:15:32.099882 containerd[1549]: time="2025-08-13T01:15:32.099753731Z" level=info msg="received exit event container_id:\"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" id:\"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" pid:2984 exited_at:{seconds:1755047732 nanos:99044722}" Aug 13 01:15:32.100450 containerd[1549]: time="2025-08-13T01:15:32.100433381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" id:\"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" pid:2984 exited_at:{seconds:1755047732 nanos:99044722}" Aug 13 01:15:32.144854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd-rootfs.mount: Deactivated successfully. 
Aug 13 01:15:32.185178 containerd[1549]: time="2025-08-13T01:15:32.185142666Z" level=info msg="StopContainer for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" returns successfully" Aug 13 01:15:32.187163 containerd[1549]: time="2025-08-13T01:15:32.186978724Z" level=info msg="StopPodSandbox for \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\"" Aug 13 01:15:32.187896 containerd[1549]: time="2025-08-13T01:15:32.187665619Z" level=info msg="Container to stop \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:15:32.207687 systemd[1]: cri-containerd-e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf.scope: Deactivated successfully. Aug 13 01:15:32.216158 containerd[1549]: time="2025-08-13T01:15:32.216113040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" id:\"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" pid:2737 exit_status:137 exited_at:{seconds:1755047732 nanos:215788561}" Aug 13 01:15:32.265105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf-rootfs.mount: Deactivated successfully. Aug 13 01:15:32.267439 containerd[1549]: time="2025-08-13T01:15:32.267384747Z" level=info msg="shim disconnected" id=e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf namespace=k8s.io Aug 13 01:15:32.267439 containerd[1549]: time="2025-08-13T01:15:32.267411144Z" level=warning msg="cleaning up after shim disconnected" id=e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf namespace=k8s.io Aug 13 01:15:32.267439 containerd[1549]: time="2025-08-13T01:15:32.267419009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:15:32.287795 containerd[1549]: time="2025-08-13T01:15:32.287733620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:15:32.288096 containerd[1549]: time="2025-08-13T01:15:32.287722463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs/usr/bin/node-driver-registrar: no space left on device" Aug 13 01:15:32.288396 kubelet[1922]: E0813 01:15:32.288345 1922 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:15:32.288473 kubelet[1922]: E0813 01:15:32.288401 1922 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:15:32.288674 kubelet[1922]: E0813 01:15:32.288526 1922 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2f9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpdzq_calico-system(706cc235-88f9-4461-aacc-ad5d00a0de1c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs/usr/bin/node-driver-registrar: no space left on device" logger="UnhandledError" Aug 13 01:15:32.289831 kubelet[1922]: E0813 01:15:32.289769 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:32.293582 containerd[1549]: time="2025-08-13T01:15:32.291729945Z" level=info msg="received exit event sandbox_id:\"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" exit_status:137 exited_at:{seconds:1755047732 nanos:215788561}" Aug 13 01:15:32.293456 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf-shm.mount: Deactivated successfully. Aug 13 01:15:32.313759 kubelet[1922]: E0813 01:15:32.313616 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:32.403734 systemd-networkd[1460]: calif2dab148ba3: Link DOWN Aug 13 01:15:32.403746 systemd-networkd[1460]: calif2dab148ba3: Lost carrier Aug 13 01:15:32.491250 kubelet[1922]: I0813 01:15:32.491210 1922 scope.go:117] "RemoveContainer" containerID="7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd" Aug 13 01:15:32.492378 kubelet[1922]: E0813 01:15:32.492119 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-qpdzq" podUID="706cc235-88f9-4461-aacc-ad5d00a0de1c" Aug 13 01:15:32.492793 containerd[1549]: time="2025-08-13T01:15:32.492766870Z" level=info msg="RemoveContainer for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\"" Aug 13 01:15:32.495644 containerd[1549]: time="2025-08-13T01:15:32.495614352Z" level=info msg="RemoveContainer for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" returns successfully" Aug 13 01:15:32.495840 kubelet[1922]: I0813 01:15:32.495772 1922 scope.go:117] "RemoveContainer" containerID="7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd" Aug 13 01:15:32.495994 containerd[1549]: time="2025-08-13T01:15:32.495968511Z" level=error msg="ContainerStatus for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": not found" Aug 13 01:15:32.496221 kubelet[1922]: E0813 01:15:32.496196 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": not found" containerID="7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd" Aug 13 01:15:32.496280 kubelet[1922]: I0813 01:15:32.496246 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd"} err="failed to get container status \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": not found" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.402 [INFO][3646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.402 [INFO][3646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" iface="eth0" netns="/var/run/netns/cni-87043cd6-a06a-5e24-1820-5b09308c7933" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.402 [INFO][3646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" iface="eth0" netns="/var/run/netns/cni-87043cd6-a06a-5e24-1820-5b09308c7933" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.409 [INFO][3646] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" after=7.069753ms iface="eth0" netns="/var/run/netns/cni-87043cd6-a06a-5e24-1820-5b09308c7933" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.409 [INFO][3646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.409 [INFO][3646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.431 [INFO][3653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.432 [INFO][3653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:32.537460 containerd[1549]: 2025-08-13 01:15:32.432 [INFO][3653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:32.537815 containerd[1549]: 2025-08-13 01:15:32.530 [INFO][3653] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:32.537815 containerd[1549]: 2025-08-13 01:15:32.530 [INFO][3653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:32.537815 containerd[1549]: 2025-08-13 01:15:32.532 [INFO][3653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:32.537815 containerd[1549]: 2025-08-13 01:15:32.535 [INFO][3646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:32.538116 containerd[1549]: time="2025-08-13T01:15:32.537997769Z" level=info msg="TearDown network for sandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" successfully" Aug 13 01:15:32.538116 containerd[1549]: time="2025-08-13T01:15:32.538017252Z" level=info msg="StopPodSandbox for \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" returns successfully" Aug 13 01:15:32.540211 systemd[1]: run-netns-cni\x2d87043cd6\x2da06a\x2d5e24\x2d1820\x2d5b09308c7933.mount: Deactivated successfully. 
Aug 13 01:15:32.546616 kubelet[1922]: I0813 01:15:32.546586 1922 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="default/nginx-deployment-7fcdb87857-5272h" Aug 13 01:15:32.546616 kubelet[1922]: I0813 01:15:32.546610 1922 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["default/nginx-deployment-7fcdb87857-5272h"] Aug 13 01:15:32.573334 kubelet[1922]: I0813 01:15:32.573299 1922 scope.go:117] "RemoveContainer" containerID="7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd" Aug 13 01:15:32.573615 containerd[1549]: time="2025-08-13T01:15:32.573488306Z" level=error msg="ContainerStatus for \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": not found" Aug 13 01:15:32.573801 kubelet[1922]: I0813 01:15:32.573763 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd"} err="failed to get container status \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7568a9c0ab27827bf825253b3ff8c29ab9ca4b12672ae7b6841826acdcc46fbd\": not found" Aug 13 01:15:32.590429 kubelet[1922]: I0813 01:15:32.590145 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9df\" (UniqueName: \"kubernetes.io/projected/c70154b0-6d90-4dae-8732-d244ece81fb7-kube-api-access-hr9df\") pod \"c70154b0-6d90-4dae-8732-d244ece81fb7\" (UID: \"c70154b0-6d90-4dae-8732-d244ece81fb7\") " Aug 13 01:15:32.593224 kubelet[1922]: I0813 01:15:32.593191 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70154b0-6d90-4dae-8732-d244ece81fb7-kube-api-access-hr9df" (OuterVolumeSpecName: "kube-api-access-hr9df") pod "c70154b0-6d90-4dae-8732-d244ece81fb7" (UID: "c70154b0-6d90-4dae-8732-d244ece81fb7"). InnerVolumeSpecName "kube-api-access-hr9df". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:15:32.690795 kubelet[1922]: I0813 01:15:32.690624 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hr9df\" (UniqueName: \"kubernetes.io/projected/c70154b0-6d90-4dae-8732-d244ece81fb7-kube-api-access-hr9df\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:32.759550 systemd[1]: var-lib-kubelet-pods-c70154b0\x2d6d90\x2d4dae\x2d8732\x2dd244ece81fb7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr9df.mount: Deactivated successfully. Aug 13 01:15:33.314365 kubelet[1922]: E0813 01:15:33.314296 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:33.392223 systemd[1]: Removed slice kubepods-besteffort-podc70154b0_6d90_4dae_8732_d244ece81fb7.slice - libcontainer container kubepods-besteffort-podc70154b0_6d90_4dae_8732_d244ece81fb7.slice. 
Aug 13 01:15:33.547069 kubelet[1922]: I0813 01:15:33.547037 1922 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["default/nginx-deployment-7fcdb87857-5272h"] Aug 13 01:15:33.555821 kubelet[1922]: I0813 01:15:33.555800 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:33.555901 kubelet[1922]: I0813 01:15:33.555835 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:33.556948 containerd[1549]: time="2025-08-13T01:15:33.556907002Z" level=info msg="StopPodSandbox for \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\"" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.593 [INFO][3674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.593 [INFO][3674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" iface="eth0" netns="" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.594 [INFO][3674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.594 [INFO][3674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.613 [INFO][3681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.613 [INFO][3681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.613 [INFO][3681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.620 [WARNING][3681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.621 [INFO][3681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:33.630424 containerd[1549]: 2025-08-13 01:15:33.626 [INFO][3681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:33.630753 containerd[1549]: 2025-08-13 01:15:33.628 [INFO][3674] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.630753 containerd[1549]: time="2025-08-13T01:15:33.630409112Z" level=info msg="TearDown network for sandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" successfully" Aug 13 01:15:33.630915 containerd[1549]: time="2025-08-13T01:15:33.630837274Z" level=info msg="StopPodSandbox for \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" returns successfully" Aug 13 01:15:33.631462 containerd[1549]: time="2025-08-13T01:15:33.631436882Z" level=info msg="RemovePodSandbox for \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\"" Aug 13 01:15:33.631551 containerd[1549]: time="2025-08-13T01:15:33.631522274Z" level=info msg="Forcibly stopping sandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\"" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.678 [INFO][3695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.678 [INFO][3695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" iface="eth0" netns="" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.678 [INFO][3695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.678 [INFO][3695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.697 [INFO][3702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.697 [INFO][3702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.697 [INFO][3702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.707 [WARNING][3702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.707 [INFO][3702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" HandleID="k8s-pod-network.e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Workload="192.168.178.99-k8s-nginx--deployment--7fcdb87857--5272h-eth0" Aug 13 01:15:33.713139 containerd[1549]: 2025-08-13 01:15:33.709 [INFO][3702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:33.713789 containerd[1549]: 2025-08-13 01:15:33.711 [INFO][3695] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf" Aug 13 01:15:33.713789 containerd[1549]: time="2025-08-13T01:15:33.713189654Z" level=info msg="TearDown network for sandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" successfully" Aug 13 01:15:33.714805 containerd[1549]: time="2025-08-13T01:15:33.714781128Z" level=info msg="Ensure that sandbox e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf in task-service has been cleanup successfully" Aug 13 01:15:33.717854 containerd[1549]: time="2025-08-13T01:15:33.717758692Z" level=info msg="RemovePodSandbox \"e2d7969e74d4f23b682b311a113fe3f5abb76ec6d22c7f2d6d65dbe29065b5bf\" returns successfully" Aug 13 01:15:33.718421 kubelet[1922]: I0813 01:15:33.718400 1922 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:33.732911 kubelet[1922]: I0813 01:15:33.732872 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:33.732991 kubelet[1922]: I0813 01:15:33.732954 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["tigera-operator/tigera-operator-747864d56d-v7fpp","calico-system/goldmane-768f4c5c69-mc8t4","calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8","calico-system/csi-node-driver-qpdzq"] Aug 13 01:15:33.733771 containerd[1549]: time="2025-08-13T01:15:33.733649953Z" level=info msg="StopContainer for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" with timeout 2 (s)" Aug 13 01:15:33.733990 containerd[1549]: time="2025-08-13T01:15:33.733972750Z" level=info msg="Stop container \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" with signal terminated" Aug 13 01:15:33.810756 systemd[1]: cri-containerd-b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796.scope: Deactivated successfully. Aug 13 01:15:33.811240 systemd[1]: cri-containerd-b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796.scope: Consumed 768ms CPU time, 68.2M memory peak. Aug 13 01:15:33.812430 containerd[1549]: time="2025-08-13T01:15:33.812393832Z" level=info msg="received exit event container_id:\"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" id:\"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" pid:2403 exited_at:{seconds:1755047733 nanos:811717699}" Aug 13 01:15:33.813055 containerd[1549]: time="2025-08-13T01:15:33.813001234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" id:\"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" pid:2403 exited_at:{seconds:1755047733 nanos:811717699}" Aug 13 01:15:33.835739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796-rootfs.mount: Deactivated successfully. 
Aug 13 01:15:33.841353 containerd[1549]: time="2025-08-13T01:15:33.841315153Z" level=info msg="StopContainer for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" returns successfully" Aug 13 01:15:33.842013 containerd[1549]: time="2025-08-13T01:15:33.841980100Z" level=info msg="StopPodSandbox for \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\"" Aug 13 01:15:33.842058 containerd[1549]: time="2025-08-13T01:15:33.842030480Z" level=info msg="Container to stop \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:15:33.848907 systemd[1]: cri-containerd-f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647.scope: Deactivated successfully. Aug 13 01:15:33.850368 containerd[1549]: time="2025-08-13T01:15:33.849700648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" id:\"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" pid:2075 exit_status:137 exited_at:{seconds:1755047733 nanos:849342919}" Aug 13 01:15:33.874162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647-rootfs.mount: Deactivated successfully. Aug 13 01:15:33.876018 containerd[1549]: time="2025-08-13T01:15:33.875969033Z" level=info msg="shim disconnected" id=f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647 namespace=k8s.io Aug 13 01:15:33.876018 containerd[1549]: time="2025-08-13T01:15:33.876000642Z" level=warning msg="cleaning up after shim disconnected" id=f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647 namespace=k8s.io Aug 13 01:15:33.876018 containerd[1549]: time="2025-08-13T01:15:33.876008537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:15:33.889706 containerd[1549]: time="2025-08-13T01:15:33.889258801Z" level=info msg="received exit event sandbox_id:\"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" exit_status:137 exited_at:{seconds:1755047733 nanos:849342919}" Aug 13 01:15:33.890495 containerd[1549]: time="2025-08-13T01:15:33.889960311Z" level=info msg="TearDown network for sandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" successfully" Aug 13 01:15:33.890495 containerd[1549]: time="2025-08-13T01:15:33.889985956Z" level=info msg="StopPodSandbox for \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" returns successfully" Aug 13 01:15:33.891383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647-shm.mount: Deactivated successfully. Aug 13 01:15:33.897720 kubelet[1922]: I0813 01:15:33.897668 1922 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-v7fpp" Aug 13 01:15:33.897720 kubelet[1922]: I0813 01:15:33.897692 1922 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-v7fpp"] Aug 13 01:15:33.928711 kubelet[1922]: I0813 01:15:33.928681 1922 kubelet.go:2351] "Pod admission denied" podUID="738441a0-8a78-40ac-9c10-e64fe3a1d607" pod="tigera-operator/tigera-operator-747864d56d-k9ds5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:33.969662 kubelet[1922]: I0813 01:15:33.969627 1922 kubelet.go:2351] "Pod admission denied" podUID="2e7b8d11-087a-4cbb-a0a0-af3e896409b9" pod="tigera-operator/tigera-operator-747864d56d-kklwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:33.992557 kubelet[1922]: I0813 01:15:33.992507 1922 kubelet.go:2351] "Pod admission denied" podUID="3f6c3388-ddf3-44d8-9c47-663c83852b24" pod="tigera-operator/tigera-operator-747864d56d-cmbhh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:33.997174 kubelet[1922]: I0813 01:15:33.997155 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75bda629-afc2-444d-b018-fc87852383ca-var-lib-calico\") pod \"75bda629-afc2-444d-b018-fc87852383ca\" (UID: \"75bda629-afc2-444d-b018-fc87852383ca\") " Aug 13 01:15:33.997241 kubelet[1922]: I0813 01:15:33.997186 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsrx\" (UniqueName: \"kubernetes.io/projected/75bda629-afc2-444d-b018-fc87852383ca-kube-api-access-4wsrx\") pod \"75bda629-afc2-444d-b018-fc87852383ca\" (UID: \"75bda629-afc2-444d-b018-fc87852383ca\") " Aug 13 01:15:33.997343 kubelet[1922]: I0813 01:15:33.997324 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75bda629-afc2-444d-b018-fc87852383ca-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "75bda629-afc2-444d-b018-fc87852383ca" (UID: "75bda629-afc2-444d-b018-fc87852383ca"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:15:34.000725 systemd[1]: var-lib-kubelet-pods-75bda629\x2dafc2\x2d444d\x2db018\x2dfc87852383ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4wsrx.mount: Deactivated successfully. Aug 13 01:15:34.001504 kubelet[1922]: I0813 01:15:34.001281 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bda629-afc2-444d-b018-fc87852383ca-kube-api-access-4wsrx" (OuterVolumeSpecName: "kube-api-access-4wsrx") pod "75bda629-afc2-444d-b018-fc87852383ca" (UID: "75bda629-afc2-444d-b018-fc87852383ca"). InnerVolumeSpecName "kube-api-access-4wsrx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:15:34.029359 kubelet[1922]: I0813 01:15:34.029337 1922 kubelet.go:2351] "Pod admission denied" podUID="a11f1e5b-4d39-4752-b6f3-e331efa5c625" pod="tigera-operator/tigera-operator-747864d56d-5k2lr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.053509 kubelet[1922]: I0813 01:15:34.053480 1922 kubelet.go:2351] "Pod admission denied" podUID="2c152e9d-34b9-4b95-b6c9-4c9f8593b312" pod="tigera-operator/tigera-operator-747864d56d-n2cvs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.086175 kubelet[1922]: I0813 01:15:34.086148 1922 kubelet.go:2351] "Pod admission denied" podUID="42027efc-eb6c-4a3a-aa83-a32eb48147e8" pod="tigera-operator/tigera-operator-747864d56d-tkbqv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:34.091969 kubelet[1922]: I0813 01:15:34.091942 1922 status_manager.go:890] "Failed to get status for pod" podUID="42027efc-eb6c-4a3a-aa83-a32eb48147e8" pod="tigera-operator/tigera-operator-747864d56d-tkbqv" err="pods \"tigera-operator-747864d56d-tkbqv\" is forbidden: User \"system:node:192.168.178.99\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '192.168.178.99' and this object" Aug 13 01:15:34.097817 kubelet[1922]: I0813 01:15:34.097795 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wsrx\" (UniqueName: \"kubernetes.io/projected/75bda629-afc2-444d-b018-fc87852383ca-kube-api-access-4wsrx\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:34.097817 kubelet[1922]: I0813 01:15:34.097814 1922 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75bda629-afc2-444d-b018-fc87852383ca-var-lib-calico\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:34.314856 kubelet[1922]: E0813 01:15:34.314787 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:34.497980 kubelet[1922]: I0813 01:15:34.497947 1922 scope.go:117] "RemoveContainer" containerID="b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796" Aug 13 01:15:34.499843 containerd[1549]: time="2025-08-13T01:15:34.499762218Z" level=info msg="RemoveContainer for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\"" Aug 13 01:15:34.502964 containerd[1549]: time="2025-08-13T01:15:34.502927935Z" level=info msg="RemoveContainer for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" returns successfully" Aug 13 01:15:34.503208 kubelet[1922]: I0813 01:15:34.503137 1922 scope.go:117] "RemoveContainer" containerID="b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796" Aug 13 01:15:34.503499 systemd[1]: Removed slice kubepods-besteffort-pod75bda629_afc2_444d_b018_fc87852383ca.slice - libcontainer container kubepods-besteffort-pod75bda629_afc2_444d_b018_fc87852383ca.slice. Aug 13 01:15:34.503602 systemd[1]: kubepods-besteffort-pod75bda629_afc2_444d_b018_fc87852383ca.slice: Consumed 796ms CPU time, 68.4M memory peak. 
Aug 13 01:15:34.504157 containerd[1549]: time="2025-08-13T01:15:34.503875574Z" level=error msg="ContainerStatus for \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\": not found" Aug 13 01:15:34.504218 kubelet[1922]: E0813 01:15:34.503985 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\": not found" containerID="b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796" Aug 13 01:15:34.504218 kubelet[1922]: I0813 01:15:34.504005 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796"} err="failed to get container status \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0484214865e44a1aac016f9e9a381eb840b4c9d487ca581b70e0fbe2f787796\": not found" Aug 13 01:15:34.527803 kubelet[1922]: I0813 01:15:34.527747 1922 kubelet.go:2351] "Pod admission denied" podUID="50f62e11-d4dd-4ca9-89f3-2bdbe2d277c7" pod="tigera-operator/tigera-operator-747864d56d-777zt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.550161 kubelet[1922]: I0813 01:15:34.550010 1922 kubelet.go:2351] "Pod admission denied" podUID="038edb20-ea36-44ac-95e1-5f12787a07c0" pod="tigera-operator/tigera-operator-747864d56d-bv299" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.595818 kubelet[1922]: I0813 01:15:34.595142 1922 kubelet.go:2351] "Pod admission denied" podUID="4f44608f-0be0-4801-9b21-33651d60ffb2" pod="tigera-operator/tigera-operator-747864d56d-hltqd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.618994 kubelet[1922]: I0813 01:15:34.618947 1922 kubelet.go:2351] "Pod admission denied" podUID="14cc48d2-853c-498a-ba2d-4c1a7ab0642e" pod="tigera-operator/tigera-operator-747864d56d-9r6wz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.642138 kubelet[1922]: I0813 01:15:34.642102 1922 kubelet.go:2351] "Pod admission denied" podUID="9dd96ff2-cddb-4514-b566-19e03dc51526" pod="tigera-operator/tigera-operator-747864d56d-rmsvz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.670857 kubelet[1922]: I0813 01:15:34.670808 1922 kubelet.go:2351] "Pod admission denied" podUID="1f054dba-1d8d-4f81-af54-cd9e9c7a3d99" pod="tigera-operator/tigera-operator-747864d56d-fmrzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.692256 kubelet[1922]: I0813 01:15:34.692108 1922 kubelet.go:2351] "Pod admission denied" podUID="d77be1eb-2505-4f52-9185-91c2e6ba3ea6" pod="tigera-operator/tigera-operator-747864d56d-ft57b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.718829 kubelet[1922]: I0813 01:15:34.718777 1922 kubelet.go:2351] "Pod admission denied" podUID="e447d94f-9cd1-427b-8461-af74d9709155" pod="tigera-operator/tigera-operator-747864d56d-2v5g4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:34.743155 kubelet[1922]: I0813 01:15:34.743129 1922 kubelet.go:2351] "Pod admission denied" podUID="48bf809e-f158-4e6c-8f16-ea64cded3dac" pod="tigera-operator/tigera-operator-747864d56d-rn2zh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.767045 kubelet[1922]: I0813 01:15:34.766903 1922 kubelet.go:2351] "Pod admission denied" podUID="594ffad9-79a8-4b83-8822-f88746f5bc95" pod="tigera-operator/tigera-operator-747864d56d-ghzjk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.876538 kubelet[1922]: I0813 01:15:34.876151 1922 kubelet.go:2351] "Pod admission denied" podUID="c7dd3a8a-5697-4ae3-bbe2-572423e680e1" pod="tigera-operator/tigera-operator-747864d56d-znx66" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:34.898803 kubelet[1922]: I0813 01:15:34.898774 1922 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-v7fpp"] Aug 13 01:15:34.909586 kubelet[1922]: I0813 01:15:34.909444 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:34.909586 kubelet[1922]: I0813 01:15:34.909476 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:34.910604 containerd[1549]: time="2025-08-13T01:15:34.910546975Z" level=info msg="StopPodSandbox for \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\"" Aug 13 01:15:34.911004 containerd[1549]: time="2025-08-13T01:15:34.910979587Z" level=info msg="TearDown network for sandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" successfully" Aug 13 01:15:34.911004 containerd[1549]: time="2025-08-13T01:15:34.911000879Z" level=info msg="StopPodSandbox for \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" returns successfully" Aug 13 01:15:34.911323 containerd[1549]: time="2025-08-13T01:15:34.911305506Z" level=info msg="RemovePodSandbox for \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\"" Aug 13 01:15:34.911404 containerd[1549]: time="2025-08-13T01:15:34.911378689Z" level=info msg="Forcibly stopping sandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\"" Aug 13 01:15:34.911451 containerd[1549]: time="2025-08-13T01:15:34.911443086Z" level=info msg="TearDown network for sandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" successfully" Aug 13 01:15:34.912439 containerd[1549]: time="2025-08-13T01:15:34.912411368Z" level=info msg="Ensure that sandbox f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647 in task-service has been cleanup successfully" Aug 13 01:15:34.914896 containerd[1549]: time="2025-08-13T01:15:34.914856815Z" level=info msg="RemovePodSandbox \"f6592450c12d634fd0c654f4e892c3bf4329cefc1758ea70b600ac7fd1127647\" returns successfully" Aug 13 01:15:34.915186 kubelet[1922]: I0813 01:15:34.915173 1922 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:34.922559 kubelet[1922]: I0813 01:15:34.922540 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:34.922630 kubelet[1922]: I0813 01:15:34.922616 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-mc8t4","calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8","calico-system/csi-node-driver-qpdzq"] Aug 13 01:15:34.922724 kubelet[1922]: I0813 
01:15:34.922711 1922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:15:34.923205 containerd[1549]: time="2025-08-13T01:15:34.923186787Z" level=info msg="StopContainer for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" with timeout 2 (s)" Aug 13 01:15:34.923483 containerd[1549]: time="2025-08-13T01:15:34.923462948Z" level=info msg="Stop container \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" with signal terminated" Aug 13 01:15:34.933552 systemd[1]: cri-containerd-339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11.scope: Deactivated successfully. Aug 13 01:15:34.935335 containerd[1549]: time="2025-08-13T01:15:34.935308868Z" level=info msg="received exit event container_id:\"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" id:\"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" pid:3262 exit_status:2 exited_at:{seconds:1755047734 nanos:934453592}" Aug 13 01:15:34.935466 containerd[1549]: time="2025-08-13T01:15:34.935445828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" id:\"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" pid:3262 exit_status:2 exited_at:{seconds:1755047734 nanos:934453592}" Aug 13 01:15:34.958150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11-rootfs.mount: Deactivated successfully. Aug 13 01:15:34.967664 containerd[1549]: time="2025-08-13T01:15:34.967630694Z" level=info msg="StopContainer for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" returns successfully" Aug 13 01:15:34.968250 containerd[1549]: time="2025-08-13T01:15:34.968226340Z" level=info msg="StopPodSandbox for \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\"" Aug 13 01:15:34.968289 containerd[1549]: time="2025-08-13T01:15:34.968275939Z" level=info msg="Container to stop \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:15:34.975093 systemd[1]: cri-containerd-dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7.scope: Deactivated successfully. Aug 13 01:15:34.977030 containerd[1549]: time="2025-08-13T01:15:34.977004261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" pid:3145 exit_status:137 exited_at:{seconds:1755047734 nanos:976698794}" Aug 13 01:15:34.977491 kubelet[1922]: I0813 01:15:34.977443 1922 kubelet.go:2351] "Pod admission denied" podUID="f4a96b09-a57b-452a-8880-6e370bfc0eee" pod="tigera-operator/tigera-operator-747864d56d-kdck5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:35.000121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7-rootfs.mount: Deactivated successfully. 
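The StopContainer entries just above spell out the stop sequence for the evicted goldmane container: containerd asks the runtime to deliver SIGTERM ("Stop container ... with signal terminated"), waits up to the requested timeout of 2 seconds, and only escalates if the task has not exited; here the task leaves with exit_status:2 before the deadline, so the cri-containerd scope simply deactivates. The sketch below mirrors that terminate-then-kill pattern against an ordinary child process, using plain os/exec rather than the containerd or CRI client (an assumption made purely for a runnable example).

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM to cmd's process, waits up to timeout for it
// to exit, and falls back to SIGKILL, mirroring the StopContainer sequence
// visible in the log (timeout 2s, "signal terminated").
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited on its own after SIGTERM
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	err := stopWithTimeout(cmd, 2*time.Second)
	fmt.Println("stopped:", err) // "signal: terminated" on a cooperative exit
}

Running the same helper against a process that ignores SIGTERM exercises the other branch: the two-second wait elapses and the fallback Kill takes effect.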
Aug 13 01:15:35.002985 containerd[1549]: time="2025-08-13T01:15:35.002945015Z" level=info msg="shim disconnected" id=dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7 namespace=k8s.io Aug 13 01:15:35.003271 containerd[1549]: time="2025-08-13T01:15:35.003247542Z" level=warning msg="cleaning up after shim disconnected" id=dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7 namespace=k8s.io Aug 13 01:15:35.003327 containerd[1549]: time="2025-08-13T01:15:35.003262700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:15:35.017152 containerd[1549]: time="2025-08-13T01:15:35.015336897Z" level=info msg="received exit event sandbox_id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" exit_status:137 exited_at:{seconds:1755047734 nanos:976698794}" Aug 13 01:15:35.017152 containerd[1549]: time="2025-08-13T01:15:35.015377550Z" level=error msg="Failed to handle event container_id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" pid:3145 exit_status:137 exited_at:{seconds:1755047734 nanos:976698794} for dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Aug 13 01:15:35.016983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7-shm.mount: Deactivated successfully. Aug 13 01:15:35.055334 systemd-networkd[1460]: cali46bb075125b: Link DOWN Aug 13 01:15:35.055417 systemd-networkd[1460]: cali46bb075125b: Lost carrier Aug 13 01:15:35.078244 kubelet[1922]: I0813 01:15:35.078180 1922 kubelet.go:2351] "Pod admission denied" podUID="9c78eaff-f7e2-4b19-bc51-12b23a7412c7" pod="tigera-operator/tigera-operator-747864d56d-fgrpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.053 [INFO][3834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.053 [INFO][3834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" iface="eth0" netns="/var/run/netns/cni-2563b99c-20fe-ec49-aeb8-14f7b3cb7ca0" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.054 [INFO][3834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" iface="eth0" netns="/var/run/netns/cni-2563b99c-20fe-ec49-aeb8-14f7b3cb7ca0" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.069 [INFO][3834] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" after=15.140162ms iface="eth0" netns="/var/run/netns/cni-2563b99c-20fe-ec49-aeb8-14f7b3cb7ca0" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.069 [INFO][3834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.069 [INFO][3834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.087 [INFO][3842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.087 [INFO][3842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:35.121272 containerd[1549]: 2025-08-13 01:15:35.088 [INFO][3842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:35.121546 containerd[1549]: 2025-08-13 01:15:35.115 [INFO][3842] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:35.121546 containerd[1549]: 2025-08-13 01:15:35.115 [INFO][3842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:35.121546 containerd[1549]: 2025-08-13 01:15:35.117 [INFO][3842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:35.121546 containerd[1549]: 2025-08-13 01:15:35.119 [INFO][3834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:35.123309 containerd[1549]: time="2025-08-13T01:15:35.123251564Z" level=info msg="TearDown network for sandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" successfully" Aug 13 01:15:35.123309 containerd[1549]: time="2025-08-13T01:15:35.123294038Z" level=info msg="StopPodSandbox for \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" returns successfully" Aug 13 01:15:35.123802 systemd[1]: run-netns-cni\x2d2563b99c\x2d20fe\x2dec49\x2daeb8\x2d14f7b3cb7ca0.mount: Deactivated successfully. Aug 13 01:15:35.125453 kubelet[1922]: I0813 01:15:35.125053 1922 kubelet.go:2351] "Pod admission denied" podUID="42979959-5027-4dbb-bc6d-d16c497b13cd" pod="tigera-operator/tigera-operator-747864d56d-sjkkr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:35.131500 kubelet[1922]: I0813 01:15:35.130739 1922 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-mc8t4" Aug 13 01:15:35.131500 kubelet[1922]: I0813 01:15:35.130757 1922 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-mc8t4"] Aug 13 01:15:35.204389 kubelet[1922]: I0813 01:15:35.204371 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ptn\" (UniqueName: \"kubernetes.io/projected/40a60688-4101-413f-9647-7f5f3b8b0a05-kube-api-access-k9ptn\") pod \"40a60688-4101-413f-9647-7f5f3b8b0a05\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " Aug 13 01:15:35.204757 kubelet[1922]: I0813 01:15:35.204447 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-config\") pod \"40a60688-4101-413f-9647-7f5f3b8b0a05\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " Aug 13 01:15:35.204757 kubelet[1922]: I0813 01:15:35.204471 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-ca-bundle\") pod \"40a60688-4101-413f-9647-7f5f3b8b0a05\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " Aug 13 01:15:35.204757 kubelet[1922]: I0813 01:15:35.204488 1922 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-key-pair\") pod \"40a60688-4101-413f-9647-7f5f3b8b0a05\" (UID: \"40a60688-4101-413f-9647-7f5f3b8b0a05\") " Aug 13 01:15:35.205895 kubelet[1922]: I0813 01:15:35.205871 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-config" (OuterVolumeSpecName: "config") pod "40a60688-4101-413f-9647-7f5f3b8b0a05" (UID: "40a60688-4101-413f-9647-7f5f3b8b0a05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:15:35.206342 kubelet[1922]: I0813 01:15:35.206142 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "40a60688-4101-413f-9647-7f5f3b8b0a05" (UID: "40a60688-4101-413f-9647-7f5f3b8b0a05"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:15:35.209589 kubelet[1922]: I0813 01:15:35.208219 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a60688-4101-413f-9647-7f5f3b8b0a05-kube-api-access-k9ptn" (OuterVolumeSpecName: "kube-api-access-k9ptn") pod "40a60688-4101-413f-9647-7f5f3b8b0a05" (UID: "40a60688-4101-413f-9647-7f5f3b8b0a05"). InnerVolumeSpecName "kube-api-access-k9ptn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:15:35.208946 systemd[1]: var-lib-kubelet-pods-40a60688\x2d4101\x2d413f\x2d9647\x2d7f5f3b8b0a05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9ptn.mount: Deactivated successfully. Aug 13 01:15:35.209048 systemd[1]: var-lib-kubelet-pods-40a60688\x2d4101\x2d413f\x2d9647\x2d7f5f3b8b0a05-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:15:35.213774 kubelet[1922]: I0813 01:15:35.213745 1922 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "40a60688-4101-413f-9647-7f5f3b8b0a05" (UID: "40a60688-4101-413f-9647-7f5f3b8b0a05"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:15:35.277539 kubelet[1922]: I0813 01:15:35.277510 1922 kubelet.go:2351] "Pod admission denied" podUID="ddf29eb3-863e-449d-9f18-0806175f5e57" pod="tigera-operator/tigera-operator-747864d56d-vjpvr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:35.280801 kubelet[1922]: I0813 01:15:35.280782 1922 status_manager.go:890] "Failed to get status for pod" podUID="ddf29eb3-863e-449d-9f18-0806175f5e57" pod="tigera-operator/tigera-operator-747864d56d-vjpvr" err="pods \"tigera-operator-747864d56d-vjpvr\" is forbidden: User \"system:node:192.168.178.99\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '192.168.178.99' and this object" Aug 13 01:15:35.305056 kubelet[1922]: I0813 01:15:35.305029 1922 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k9ptn\" (UniqueName: \"kubernetes.io/projected/40a60688-4101-413f-9647-7f5f3b8b0a05-kube-api-access-k9ptn\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:35.305056 kubelet[1922]: I0813 01:15:35.305046 1922 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-config\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:35.305056 kubelet[1922]: I0813 01:15:35.305056 1922 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-ca-bundle\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:35.305152 kubelet[1922]: I0813 01:15:35.305064 1922 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/40a60688-4101-413f-9647-7f5f3b8b0a05-goldmane-key-pair\") on node \"192.168.178.99\" DevicePath \"\"" Aug 13 01:15:35.315269 kubelet[1922]: E0813 01:15:35.315250 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:35.389327 systemd[1]: Removed slice kubepods-besteffort-pod40a60688_4101_413f_9647_7f5f3b8b0a05.slice - libcontainer container kubepods-besteffort-pod40a60688_4101_413f_9647_7f5f3b8b0a05.slice. 
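The recurring file_linux.go:61 errors ("Unable to read config path ... /etc/kubernetes/manifests") come from the kubelet's static-pod file source polling a manifest directory that does not exist on this node; the condition is logged and ignored rather than treated as fatal. A tiny sketch of that existence check, where listManifests is an illustrative helper and not kubelet code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listManifests returns the static pod manifests under dir, or nothing if the
// directory does not exist, matching the "path does not exist, ignoring"
// behaviour seen in the log.
func listManifests(dir string) ([]string, error) {
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		fmt.Printf("Unable to read config path: %q does not exist, ignoring\n", dir)
		return nil, nil
	}
	return filepath.Glob(filepath.Join(dir, "*.yaml"))
}

func main() {
	files, err := listManifests("/etc/kubernetes/manifests")
	fmt.Println(files, err)
}

Since the error fires only when the path is missing, creating the directory (even empty) would quiet the message on nodes that do not use static pods.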
Aug 13 01:15:35.502103 kubelet[1922]: I0813 01:15:35.501753 1922 scope.go:117] "RemoveContainer" containerID="339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11" Aug 13 01:15:35.505612 containerd[1549]: time="2025-08-13T01:15:35.504699135Z" level=info msg="RemoveContainer for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\"" Aug 13 01:15:35.508481 containerd[1549]: time="2025-08-13T01:15:35.508450666Z" level=info msg="RemoveContainer for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" returns successfully" Aug 13 01:15:35.508943 kubelet[1922]: I0813 01:15:35.508716 1922 scope.go:117] "RemoveContainer" containerID="339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11" Aug 13 01:15:35.509425 containerd[1549]: time="2025-08-13T01:15:35.509348260Z" level=error msg="ContainerStatus for \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\": not found" Aug 13 01:15:35.509698 kubelet[1922]: E0813 01:15:35.509658 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\": not found" containerID="339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11" Aug 13 01:15:35.509698 kubelet[1922]: I0813 01:15:35.509684 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11"} err="failed to get container status \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\": rpc error: code = NotFound desc = an error occurred when try to find container \"339c439ccc846c8bbd0c7fc2079e052c8dc9f77c352f5555ed914ac43cd06c11\": not found" Aug 13 01:15:35.523900 kubelet[1922]: I0813 01:15:35.523868 1922 kubelet.go:2351] "Pod admission denied" podUID="e058daf6-efcb-462e-9b8e-f72d9ea3c7d8" pod="tigera-operator/tigera-operator-747864d56d-7w4xg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:35.624953 kubelet[1922]: I0813 01:15:35.624905 1922 kubelet.go:2351] "Pod admission denied" podUID="51115050-7b16-416d-aea2-97e871453b62" pod="tigera-operator/tigera-operator-747864d56d-gx9tv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:35.825729 kubelet[1922]: I0813 01:15:35.825688 1922 kubelet.go:2351] "Pod admission denied" podUID="e44ceffe-ab07-4681-9b90-a2f649ef5ed8" pod="tigera-operator/tigera-operator-747864d56d-75xmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:35.925792 kubelet[1922]: I0813 01:15:35.925764 1922 kubelet.go:2351] "Pod admission denied" podUID="ec6baf75-9753-4729-9038-b917a00cc6e4" pod="tigera-operator/tigera-operator-747864d56d-wlr2d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.026760 kubelet[1922]: I0813 01:15:36.026734 1922 kubelet.go:2351] "Pod admission denied" podUID="82acf1df-9495-49d3-86c7-8dacf5387007" pod="tigera-operator/tigera-operator-747864d56d-jqfmv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:36.125365 kubelet[1922]: I0813 01:15:36.125268 1922 kubelet.go:2351] "Pod admission denied" podUID="b4a4c367-0575-447d-8e41-587fc04f0d6b" pod="tigera-operator/tigera-operator-747864d56d-cstxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.131215 kubelet[1922]: I0813 01:15:36.131182 1922 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-mc8t4"] Aug 13 01:15:36.139660 kubelet[1922]: I0813 01:15:36.139639 1922 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:36.139660 kubelet[1922]: I0813 01:15:36.139668 1922 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:36.140924 containerd[1549]: time="2025-08-13T01:15:36.140901079Z" level=info msg="StopPodSandbox for \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\"" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.181 [INFO][3865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.181 [INFO][3865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" iface="eth0" netns="" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.181 [INFO][3865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.181 [INFO][3865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.199 [INFO][3872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.199 [INFO][3872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.199 [INFO][3872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.205 [WARNING][3872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.205 [INFO][3872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:36.210163 containerd[1549]: 2025-08-13 01:15:36.206 [INFO][3872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:36.214095 containerd[1549]: 2025-08-13 01:15:36.208 [INFO][3865] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.214095 containerd[1549]: time="2025-08-13T01:15:36.210197398Z" level=info msg="TearDown network for sandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" successfully" Aug 13 01:15:36.214095 containerd[1549]: time="2025-08-13T01:15:36.210235287Z" level=info msg="StopPodSandbox for \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" returns successfully" Aug 13 01:15:36.214095 containerd[1549]: time="2025-08-13T01:15:36.210734268Z" level=info msg="RemovePodSandbox for \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\"" Aug 13 01:15:36.214095 containerd[1549]: time="2025-08-13T01:15:36.210759971Z" level=info msg="Forcibly stopping sandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\"" Aug 13 01:15:36.233457 kubelet[1922]: I0813 01:15:36.233407 1922 kubelet.go:2351] "Pod admission denied" podUID="f3ac7433-1cda-4c5a-9c75-fbc1c41d564b" pod="tigera-operator/tigera-operator-747864d56d-87hc4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.275345 containerd[1549]: time="2025-08-13T01:15:36.275293177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" id:\"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" pid:3145 exit_status:137 exited_at:{seconds:1755047734 nanos:976698794}" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.248 [INFO][3886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.248 [INFO][3886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" iface="eth0" netns="" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.248 [INFO][3886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.248 [INFO][3886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.265 [INFO][3893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.265 [INFO][3893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.265 [INFO][3893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.272 [WARNING][3893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.272 [INFO][3893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" HandleID="k8s-pod-network.dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Workload="192.168.178.99-k8s-goldmane--768f4c5c69--mc8t4-eth0" Aug 13 01:15:36.278260 containerd[1549]: 2025-08-13 01:15:36.274 [INFO][3893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:36.278613 containerd[1549]: 2025-08-13 01:15:36.276 [INFO][3886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7" Aug 13 01:15:36.278613 containerd[1549]: time="2025-08-13T01:15:36.278299654Z" level=info msg="TearDown network for sandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" successfully" Aug 13 01:15:36.279388 containerd[1549]: time="2025-08-13T01:15:36.279371593Z" level=info msg="Ensure that sandbox dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7 in task-service has been cleanup successfully" Aug 13 01:15:36.282207 containerd[1549]: time="2025-08-13T01:15:36.282188602Z" level=info msg="RemovePodSandbox \"dfa06159ce9f10193ad8cab07f002b41d2e529a197b86c64d71617ed3cf1fed7\" returns successfully" Aug 13 01:15:36.282751 kubelet[1922]: I0813 01:15:36.282731 1922 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:36.296317 kubelet[1922]: I0813 01:15:36.296290 1922 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:36.296383 kubelet[1922]: I0813 01:15:36.296338 1922 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-node-rfw6k","kube-system/kube-proxy-kdpw8","calico-system/csi-node-driver-qpdzq"] Aug 13 01:15:36.296383 kubelet[1922]: E0813 01:15:36.296365 1922 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-rfw6k" Aug 13 01:15:36.296383 kubelet[1922]: E0813 01:15:36.296374 1922 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kdpw8" Aug 13 01:15:36.296383 kubelet[1922]: E0813 01:15:36.296383 1922 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-qpdzq" Aug 13 01:15:36.296478 kubelet[1922]: I0813 01:15:36.296392 1922 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:36.315950 kubelet[1922]: E0813 01:15:36.315927 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:36.326918 kubelet[1922]: I0813 01:15:36.326895 1922 kubelet.go:2351] "Pod admission denied" podUID="0a78b914-d06a-4074-927b-c30635dd0832" pod="tigera-operator/tigera-operator-747864d56d-bcklm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.426638 kubelet[1922]: I0813 01:15:36.426591 1922 kubelet.go:2351] "Pod admission denied" podUID="d7315851-b4d9-435b-84ae-153793909839" pod="tigera-operator/tigera-operator-747864d56d-6n969" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:36.627232 kubelet[1922]: I0813 01:15:36.627200 1922 kubelet.go:2351] "Pod admission denied" podUID="41b636c4-5f66-402f-8b59-906ea88f710b" pod="tigera-operator/tigera-operator-747864d56d-zghh7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.729501 kubelet[1922]: I0813 01:15:36.728932 1922 kubelet.go:2351] "Pod admission denied" podUID="b89f2c81-18b1-4f54-b893-cabf2641d230" pod="tigera-operator/tigera-operator-747864d56d-xnjdv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.829181 kubelet[1922]: I0813 01:15:36.829114 1922 kubelet.go:2351] "Pod admission denied" podUID="969e1f4c-9641-4090-8c56-e6137858c8c3" pod="tigera-operator/tigera-operator-747864d56d-f4tb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:36.925485 kubelet[1922]: I0813 01:15:36.925432 1922 kubelet.go:2351] "Pod admission denied" podUID="3d318392-d6a4-4ea9-8db3-44ab373d8593" pod="tigera-operator/tigera-operator-747864d56d-lz6kn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.026016 kubelet[1922]: I0813 01:15:37.025595 1922 kubelet.go:2351] "Pod admission denied" podUID="f04b5787-0869-4da9-8e51-b5cb509b430b" pod="tigera-operator/tigera-operator-747864d56d-twbnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.127937 kubelet[1922]: I0813 01:15:37.127889 1922 kubelet.go:2351] "Pod admission denied" podUID="6733686a-759c-415d-9c53-5e148a4bf00e" pod="tigera-operator/tigera-operator-747864d56d-9r9qn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.227868 kubelet[1922]: I0813 01:15:37.227830 1922 kubelet.go:2351] "Pod admission denied" podUID="60944bf0-e68b-47a5-bb38-008ab4ef341b" pod="tigera-operator/tigera-operator-747864d56d-lh7g4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.316138 kubelet[1922]: E0813 01:15:37.316040 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:37.325666 kubelet[1922]: I0813 01:15:37.325640 1922 kubelet.go:2351] "Pod admission denied" podUID="df6c0906-bb75-4d9d-ab77-d0ba6402d266" pod="tigera-operator/tigera-operator-747864d56d-qnd2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.430441 kubelet[1922]: I0813 01:15:37.430373 1922 kubelet.go:2351] "Pod admission denied" podUID="cf873512-68c2-460f-bcbf-c94eb47d2af6" pod="tigera-operator/tigera-operator-747864d56d-svsgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.528132 kubelet[1922]: I0813 01:15:37.527913 1922 kubelet.go:2351] "Pod admission denied" podUID="99febe52-ee0c-4f9b-b1cf-6dcd5a871ff3" pod="tigera-operator/tigera-operator-747864d56d-f8psv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.630676 kubelet[1922]: I0813 01:15:37.630506 1922 kubelet.go:2351] "Pod admission denied" podUID="023deefa-12c1-4317-a894-14e5c13cda78" pod="tigera-operator/tigera-operator-747864d56d-2vxxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.730916 kubelet[1922]: I0813 01:15:37.730857 1922 kubelet.go:2351] "Pod admission denied" podUID="4c27cb64-3e57-42d0-9de8-c3daf4236c80" pod="tigera-operator/tigera-operator-747864d56d-92dwj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:37.776544 kubelet[1922]: I0813 01:15:37.776492 1922 kubelet.go:2351] "Pod admission denied" podUID="7cd5aa2d-8191-4f5f-8906-cde6023af211" pod="tigera-operator/tigera-operator-747864d56d-v8vs4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.878813 kubelet[1922]: I0813 01:15:37.878779 1922 kubelet.go:2351] "Pod admission denied" podUID="ad7ae7ee-cb1a-46fc-b441-eda0afa18eec" pod="tigera-operator/tigera-operator-747864d56d-brmpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:37.977811 kubelet[1922]: I0813 01:15:37.977758 1922 kubelet.go:2351] "Pod admission denied" podUID="d3eae47b-5502-441c-b0b1-0c114c105489" pod="tigera-operator/tigera-operator-747864d56d-p9fbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.076459 kubelet[1922]: I0813 01:15:38.076398 1922 kubelet.go:2351] "Pod admission denied" podUID="ea2581bd-833f-4003-b8d4-77bd3c792ddf" pod="tigera-operator/tigera-operator-747864d56d-qjv49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.183983 kubelet[1922]: I0813 01:15:38.183945 1922 kubelet.go:2351] "Pod admission denied" podUID="8e4873ca-b8db-4b8c-94ac-365aad0333da" pod="tigera-operator/tigera-operator-747864d56d-25f7z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.278005 kubelet[1922]: I0813 01:15:38.277557 1922 kubelet.go:2351] "Pod admission denied" podUID="4bd7a360-dfb7-40aa-8e65-cd784a84f574" pod="tigera-operator/tigera-operator-747864d56d-lphf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.316680 kubelet[1922]: E0813 01:15:38.316613 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:38.377126 kubelet[1922]: I0813 01:15:38.377082 1922 kubelet.go:2351] "Pod admission denied" podUID="0d2ef78e-822d-4ea2-906a-bba20cf037e4" pod="tigera-operator/tigera-operator-747864d56d-p4hlm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.477654 kubelet[1922]: I0813 01:15:38.477330 1922 kubelet.go:2351] "Pod admission denied" podUID="faf625ab-4cd2-41f6-885e-e0396ca22b02" pod="tigera-operator/tigera-operator-747864d56d-2nq7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.586897 kubelet[1922]: I0813 01:15:38.586515 1922 kubelet.go:2351] "Pod admission denied" podUID="2e906e19-83e9-4461-8ab8-e476ceae62aa" pod="tigera-operator/tigera-operator-747864d56d-7fngb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.678266 kubelet[1922]: I0813 01:15:38.678197 1922 kubelet.go:2351] "Pod admission denied" podUID="5cf1f09c-a4d9-43e2-bcdc-243bb4dab66e" pod="tigera-operator/tigera-operator-747864d56d-nhqld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:38.785073 update_engine[1532]: I20250813 01:15:38.784987 1532 update_attempter.cc:509] Updating boot flags... Aug 13 01:15:38.888108 kubelet[1922]: I0813 01:15:38.887778 1922 kubelet.go:2351] "Pod admission denied" podUID="5a0fc6a2-6c87-4156-ac60-6ff3e2c4a40b" pod="tigera-operator/tigera-operator-747864d56d-tzgcd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:38.990083 kubelet[1922]: I0813 01:15:38.990059 1922 kubelet.go:2351] "Pod admission denied" podUID="0eb6be6f-bc53-4906-b16c-97465c09e064" pod="tigera-operator/tigera-operator-747864d56d-9cdpd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.028935 kubelet[1922]: I0813 01:15:39.027293 1922 kubelet.go:2351] "Pod admission denied" podUID="526dabac-ddbd-40e3-a8b1-e35edf68bf35" pod="tigera-operator/tigera-operator-747864d56d-vj8dh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.129346 kubelet[1922]: I0813 01:15:39.129311 1922 kubelet.go:2351] "Pod admission denied" podUID="1e4a5142-46c4-4f16-b38d-50a6c2d0c5fa" pod="tigera-operator/tigera-operator-747864d56d-x92zs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.232732 kubelet[1922]: I0813 01:15:39.232699 1922 kubelet.go:2351] "Pod admission denied" podUID="dab076cc-11a5-4d3e-a6cb-f85f93b0f50f" pod="tigera-operator/tigera-operator-747864d56d-cjprs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.279360 kubelet[1922]: I0813 01:15:39.279330 1922 kubelet.go:2351] "Pod admission denied" podUID="fd540498-65ac-4688-82ae-5e5fe6c19dee" pod="tigera-operator/tigera-operator-747864d56d-zch76" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.317436 kubelet[1922]: E0813 01:15:39.317411 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:15:39.386935 kubelet[1922]: I0813 01:15:39.386863 1922 kubelet.go:2351] "Pod admission denied" podUID="d56f4bdd-8f4c-4ae2-8d13-4b199c8e6fa8" pod="tigera-operator/tigera-operator-747864d56d-2pvtx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.478060 kubelet[1922]: I0813 01:15:39.477936 1922 kubelet.go:2351] "Pod admission denied" podUID="d3978424-59df-44ac-ac32-fa90d0b54bbf" pod="tigera-operator/tigera-operator-747864d56d-jsbvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.579745 kubelet[1922]: I0813 01:15:39.579436 1922 kubelet.go:2351] "Pod admission denied" podUID="2e68423a-2e9d-44ea-aba5-97ca2faf7627" pod="tigera-operator/tigera-operator-747864d56d-nlhqq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.703431 kubelet[1922]: I0813 01:15:39.703390 1922 kubelet.go:2351] "Pod admission denied" podUID="0b73ca74-0f8a-43f3-8078-d609a591c9a0" pod="tigera-operator/tigera-operator-747864d56d-s9tjm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:39.709041 kubelet[1922]: I0813 01:15:39.708986 1922 status_manager.go:890] "Failed to get status for pod" podUID="0b73ca74-0f8a-43f3-8078-d609a591c9a0" pod="tigera-operator/tigera-operator-747864d56d-s9tjm" err="pods \"tigera-operator-747864d56d-s9tjm\" is forbidden: User \"system:node:192.168.178.99\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '192.168.178.99' and this object"