Aug 13 01:07:10.926199 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 01:07:10.926219 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:07:10.926228 kernel: BIOS-provided physical RAM map:
Aug 13 01:07:10.926234 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:07:10.926240 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:07:10.926248 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:07:10.926255 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:07:10.926261 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:07:10.926266 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:07:10.926272 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:07:10.926278 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:07:10.926284 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:07:10.926290 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:07:10.926296 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:07:10.926305 kernel: NX (Execute Disable) protection: active
Aug 13 01:07:10.926312 kernel: APIC: Static calls initialized
Aug 13 01:07:10.926318 kernel: SMBIOS 2.8 present.
Aug 13 01:07:10.926324 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:07:10.926330 kernel: Hypervisor detected: KVM
Aug 13 01:07:10.926339 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:07:10.926345 kernel: kvm-clock: using sched offset of 4678496003 cycles
Aug 13 01:07:10.926351 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:07:10.926358 kernel: tsc: Detected 1999.999 MHz processor
Aug 13 01:07:10.926365 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:07:10.926371 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:07:10.926378 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:07:10.926384 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:07:10.926391 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:07:10.926399 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:07:10.926406 kernel: Using GB pages for direct mapping
Aug 13 01:07:10.926412 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:07:10.926418 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:07:10.926425 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926431 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926437 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926444 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:07:10.926450 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926459 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926465 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926472 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:07:10.926481 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:07:10.926488 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:07:10.926495 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:07:10.926504 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:07:10.926510 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:07:10.926517 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:07:10.926524 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:07:10.926530 kernel: No NUMA configuration found
Aug 13 01:07:10.926537 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:07:10.926544 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Aug 13 01:07:10.926550 kernel: Zone ranges:
Aug 13 01:07:10.926557 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:07:10.926566 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:07:10.926573 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:07:10.926579 kernel: Movable zone start for each node
Aug 13 01:07:10.926586 kernel: Early memory node ranges
Aug 13 01:07:10.926592 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:07:10.926599 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:07:10.926605 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:07:10.926612 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:07:10.926619 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:07:10.926627 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:07:10.926634 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:07:10.926641 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:07:10.926647 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:07:10.926654 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:07:10.926661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:07:10.926667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:07:10.926674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:07:10.926680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:07:10.926689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:07:10.926696 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:07:10.926731 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:07:10.926738 kernel: TSC deadline timer available
Aug 13 01:07:10.926744 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 01:07:10.926751 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:07:10.926758 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:07:10.926764 kernel: kvm-guest: setup PV sched yield
Aug 13 01:07:10.926771 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:07:10.926781 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:07:10.926788 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:07:10.926795 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:07:10.926801 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 01:07:10.926808 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 01:07:10.926814 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:07:10.926821 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:07:10.926828 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:07:10.926835 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:07:10.926845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:07:10.926852 kernel: random: crng init done
Aug 13 01:07:10.926858 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:07:10.926865 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:07:10.926872 kernel: Fallback order for Node 0: 0
Aug 13 01:07:10.926878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Aug 13 01:07:10.926885 kernel: Policy zone: Normal
Aug 13 01:07:10.926891 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:07:10.926900 kernel: software IO TLB: area num 2.
Aug 13 01:07:10.926907 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 229348K reserved, 0K cma-reserved)
Aug 13 01:07:10.926914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:07:10.926921 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 01:07:10.926927 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 01:07:10.926934 kernel: Dynamic Preempt: voluntary
Aug 13 01:07:10.926940 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:07:10.926948 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:07:10.926955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:07:10.926964 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:07:10.926971 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:07:10.926977 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:07:10.926984 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:07:10.926991 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:07:10.926997 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:07:10.927004 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:07:10.927011 kernel: Console: colour VGA+ 80x25
Aug 13 01:07:10.927017 kernel: printk: console [tty0] enabled
Aug 13 01:07:10.927024 kernel: printk: console [ttyS0] enabled
Aug 13 01:07:10.927032 kernel: ACPI: Core revision 20230628
Aug 13 01:07:10.927039 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:07:10.927046 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:07:10.927060 kernel: x2apic enabled
Aug 13 01:07:10.927070 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:07:10.927077 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:07:10.927084 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:07:10.927091 kernel: kvm-guest: setup PV IPIs
Aug 13 01:07:10.927098 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:07:10.927105 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 01:07:10.927112 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Aug 13 01:07:10.927122 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:07:10.927129 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:07:10.927136 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:07:10.927143 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:07:10.927150 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:07:10.927159 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:07:10.927166 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:07:10.927173 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:07:10.927180 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:07:10.927188 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:07:10.927195 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:07:10.927202 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:07:10.927209 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:07:10.927218 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:07:10.927225 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:07:10.927232 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:07:10.927239 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:07:10.927246 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:07:10.927253 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:07:10.927260 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:07:10.927267 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:07:10.927274 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:07:10.927284 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:07:10.927291 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 01:07:10.927298 kernel: landlock: Up and running.
Aug 13 01:07:10.927304 kernel: SELinux: Initializing.
Aug 13 01:07:10.927311 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:07:10.927318 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:07:10.927325 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:07:10.927333 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:07:10.927340 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:07:10.927349 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:07:10.927356 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:07:10.927363 kernel: ... version: 0
Aug 13 01:07:10.927370 kernel: ... bit width: 48
Aug 13 01:07:10.927377 kernel: ... generic registers: 6
Aug 13 01:07:10.927383 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:07:10.927390 kernel: ... max period: 00007fffffffffff
Aug 13 01:07:10.927397 kernel: ... fixed-purpose events: 0
Aug 13 01:07:10.927404 kernel: ... event mask: 000000000000003f
Aug 13 01:07:10.927414 kernel: signal: max sigframe size: 3376
Aug 13 01:07:10.927421 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:07:10.927428 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:07:10.927435 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:07:10.927441 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:07:10.927448 kernel: .... node #0, CPUs: #1
Aug 13 01:07:10.927455 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:07:10.927462 kernel: smpboot: Max logical packages: 1
Aug 13 01:07:10.927469 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Aug 13 01:07:10.927478 kernel: devtmpfs: initialized
Aug 13 01:07:10.927485 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:07:10.927492 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:07:10.927499 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:07:10.927506 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:07:10.927513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:07:10.927520 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:07:10.927527 kernel: audit: type=2000 audit(1755047230.424:1): state=initialized audit_enabled=0 res=1
Aug 13 01:07:10.927534 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:07:10.927543 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:07:10.927550 kernel: cpuidle: using governor menu
Aug 13 01:07:10.927557 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:07:10.927564 kernel: dca service started, version 1.12.1
Aug 13 01:07:10.927571 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 01:07:10.927578 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:07:10.927585 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:07:10.927592 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:07:10.927599 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:07:10.927609 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:07:10.927616 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:07:10.927622 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:07:10.927629 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:07:10.927636 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:07:10.927643 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:07:10.927650 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:07:10.927657 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 01:07:10.927664 kernel: ACPI: Interpreter enabled
Aug 13 01:07:10.927673 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:07:10.927680 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:07:10.927687 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:07:10.927694 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:07:10.928028 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:07:10.928039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:07:10.928210 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:07:10.928335 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:07:10.928457 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:07:10.928467 kernel: PCI host bridge to bus 0000:00
Aug 13 01:07:10.928585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:07:10.928691 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:07:10.929511 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:07:10.929620 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:07:10.929744 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:07:10.930001 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:07:10.930104 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:07:10.930235 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 01:07:10.930364 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 01:07:10.930480 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:07:10.930593 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:07:10.930728 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:07:10.930855 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:07:10.932374 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Aug 13 01:07:10.932613 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 01:07:10.933837 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:07:10.934088 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:07:10.934256 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 01:07:10.934396 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 01:07:10.934910 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:07:10.935065 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:07:10.935188 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:07:10.935316 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 01:07:10.935438 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:07:10.935564 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 01:07:10.935690 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Aug 13 01:07:10.935830 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:07:10.936147 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 01:07:10.936267 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 01:07:10.936277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:07:10.936285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:07:10.936292 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:07:10.936302 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:07:10.936310 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:07:10.936316 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:07:10.936323 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:07:10.936330 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:07:10.936337 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:07:10.936344 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:07:10.936350 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:07:10.936357 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:07:10.936366 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:07:10.936373 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:07:10.936380 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:07:10.936387 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:07:10.936393 kernel: iommu: Default domain type: Translated
Aug 13 01:07:10.936400 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:07:10.936407 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:07:10.936414 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:07:10.936420 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:07:10.936429 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:07:10.936547 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:07:10.936665 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:07:10.938841 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:07:10.938856 kernel: vgaarb: loaded
Aug 13 01:07:10.938864 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:07:10.938871 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:07:10.938878 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:07:10.938889 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:07:10.938896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:07:10.938903 kernel: pnp: PnP ACPI init
Aug 13 01:07:10.939032 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:07:10.939043 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:07:10.939050 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:07:10.939057 kernel: NET: Registered PF_INET protocol family
Aug 13 01:07:10.939064 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:07:10.939071 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:07:10.939082 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:07:10.939089 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:07:10.939096 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:07:10.939102 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:07:10.939109 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:07:10.939116 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:07:10.939123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:07:10.939130 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:07:10.939239 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:07:10.939374 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:07:10.939488 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:07:10.939592 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:07:10.939694 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:07:10.941014 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:07:10.941025 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:07:10.941033 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:07:10.941040 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:07:10.941051 kernel: Initialise system trusted keyrings
Aug 13 01:07:10.941058 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:07:10.941065 kernel: Key type asymmetric registered
Aug 13 01:07:10.941072 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:07:10.941079 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 01:07:10.941086 kernel: io scheduler mq-deadline registered
Aug 13 01:07:10.941092 kernel: io scheduler kyber registered
Aug 13 01:07:10.941099 kernel: io scheduler bfq registered
Aug 13 01:07:10.941106 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:07:10.941116 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:07:10.941123 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:07:10.941130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:07:10.941137 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:07:10.941144 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:07:10.941150 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:07:10.941157 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:07:10.941282 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:07:10.941296 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:07:10.941404 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:07:10.941511 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:07:10 UTC (1755047230)
Aug 13 01:07:10.941617 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:07:10.941627 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:07:10.941634 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:07:10.941640 kernel: Segment Routing with IPv6
Aug 13 01:07:10.941647 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:07:10.941654 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:07:10.941665 kernel: Key type dns_resolver registered
Aug 13 01:07:10.941671 kernel: IPI shorthand broadcast: enabled
Aug 13 01:07:10.943360 kernel: sched_clock: Marking stable (734003042, 212594031)->(993770146, -47173073)
Aug 13 01:07:10.943373 kernel: registered taskstats version 1
Aug 13 01:07:10.943380 kernel: Loading compiled-in X.509 certificates
Aug 13 01:07:10.943387 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 01:07:10.943394 kernel: Key type .fscrypt registered
Aug 13 01:07:10.943401 kernel: Key type fscrypt-provisioning registered
Aug 13 01:07:10.943412 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:07:10.943419 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:07:10.943425 kernel: ima: No architecture policies found
Aug 13 01:07:10.943432 kernel: clk: Disabling unused clocks
Aug 13 01:07:10.943439 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 01:07:10.943446 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 01:07:10.943452 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 01:07:10.943459 kernel: Run /init as init process
Aug 13 01:07:10.943466 kernel: with arguments:
Aug 13 01:07:10.943472 kernel: /init
Aug 13 01:07:10.943481 kernel: with environment:
Aug 13 01:07:10.943488 kernel: HOME=/
Aug 13 01:07:10.943494 kernel: TERM=linux
Aug 13 01:07:10.943501 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:07:10.943509 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:07:10.943519 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:07:10.943527 systemd[1]: Detected virtualization kvm.
Aug 13 01:07:10.943537 systemd[1]: Detected architecture x86-64.
Aug 13 01:07:10.943544 systemd[1]: Running in initrd.
Aug 13 01:07:10.943551 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:07:10.943559 systemd[1]: Hostname set to .
Aug 13 01:07:10.943566 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:07:10.943587 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:07:10.943600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:07:10.943608 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:07:10.943616 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:07:10.943624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:07:10.943631 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:07:10.943640 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:07:10.943649 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:07:10.943659 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:07:10.943667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:07:10.943675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:07:10.943682 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:07:10.943690 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:07:10.943711 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:07:10.943730 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:07:10.943737 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:07:10.943748 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:07:10.943756 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:07:10.943764 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:07:10.943771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:07:10.943779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:07:10.943786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:07:10.943794 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:07:10.943801 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:07:10.943809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:07:10.943820 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:07:10.943827 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:07:10.943835 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:07:10.943842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:07:10.943850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:07:10.943881 systemd-journald[178]: Collecting audit messages is disabled. Aug 13 01:07:10.943902 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:07:10.943913 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:07:10.943923 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:07:10.943931 systemd-journald[178]: Journal started Aug 13 01:07:10.943948 systemd-journald[178]: Runtime Journal (/run/log/journal/bc3ec9f7366c4f0c9f6c2722741660e4) is 8M, max 78.3M, 70.3M free. Aug 13 01:07:10.937481 systemd-modules-load[179]: Inserted module 'overlay' Aug 13 01:07:10.965661 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Aug 13 01:07:10.965685 kernel: Bridge firewalling registered Aug 13 01:07:10.965751 systemd-modules-load[179]: Inserted module 'br_netfilter' Aug 13 01:07:11.001911 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:07:11.003357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:07:11.004151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:07:11.015899 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:07:11.017861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:07:11.021444 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:07:11.024865 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:07:11.037374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:07:11.054151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:07:11.057991 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:07:11.062588 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:07:11.068988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:07:11.072853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:07:11.076871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 13 01:07:11.082618 dracut-cmdline[206]: dracut-dracut-053 Aug 13 01:07:11.090677 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 01:07:11.091045 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:07:11.114500 systemd-resolved[211]: Positive Trust Anchors: Aug 13 01:07:11.114515 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:07:11.114541 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:07:11.120761 systemd-resolved[211]: Defaulting to hostname 'linux'. Aug 13 01:07:11.121759 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:07:11.122795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:07:11.170746 kernel: SCSI subsystem initialized Aug 13 01:07:11.179738 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 01:07:11.190731 kernel: iscsi: registered transport (tcp) Aug 13 01:07:11.211289 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:07:11.211330 kernel: QLogic iSCSI HBA Driver Aug 13 01:07:11.262799 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:07:11.267872 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:07:11.297258 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:07:11.297303 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:07:11.297970 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 01:07:11.340730 kernel: raid6: avx2x4 gen() 32730 MB/s Aug 13 01:07:11.358725 kernel: raid6: avx2x2 gen() 30090 MB/s Aug 13 01:07:11.377256 kernel: raid6: avx2x1 gen() 21291 MB/s Aug 13 01:07:11.377278 kernel: raid6: using algorithm avx2x4 gen() 32730 MB/s Aug 13 01:07:11.396245 kernel: raid6: .... xor() 4749 MB/s, rmw enabled Aug 13 01:07:11.396282 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:07:11.415727 kernel: xor: automatically using best checksumming function avx Aug 13 01:07:11.550739 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:07:11.565548 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:07:11.572863 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:07:11.595587 systemd-udevd[395]: Using default interface naming scheme 'v255'. Aug 13 01:07:11.601631 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:07:11.608827 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 01:07:11.625747 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Aug 13 01:07:11.661047 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 01:07:11.671035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:07:11.733224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:07:11.740311 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:07:11.759173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 01:07:11.762591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:07:11.764391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:07:11.765643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:07:11.772910 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:07:11.785215 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:07:11.808715 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:07:11.931725 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:07:11.940719 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:07:11.944741 kernel: libata version 3.00 loaded. Aug 13 01:07:11.958486 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:07:11.958604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:07:11.960568 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:07:11.961103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:07:11.961219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:07:11.964477 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:07:11.971981 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 13 01:07:11.972000 kernel: AES CTR mode by8 optimization enabled Aug 13 01:07:11.974904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:07:12.034091 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:07:12.034285 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:07:12.034298 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 01:07:12.034439 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:07:12.034574 kernel: scsi host1: ahci Aug 13 01:07:12.034738 kernel: scsi host2: ahci Aug 13 01:07:12.034902 kernel: scsi host3: ahci Aug 13 01:07:12.035211 kernel: scsi host4: ahci Aug 13 01:07:12.035344 kernel: scsi host5: ahci Aug 13 01:07:12.035476 kernel: scsi host6: ahci Aug 13 01:07:12.035611 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Aug 13 01:07:12.035621 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Aug 13 01:07:12.035631 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Aug 13 01:07:12.035645 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Aug 13 01:07:12.035654 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Aug 13 01:07:12.035664 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Aug 13 01:07:11.977466 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:07:12.088760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:07:12.094858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:07:12.113074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 01:07:12.354220 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:07:12.354287 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:07:12.354299 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:07:12.354320 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:07:12.354330 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:07:12.354339 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:07:12.366646 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:07:12.370039 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:07:12.370211 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:07:12.370354 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:07:12.370492 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:07:12.377090 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:07:12.377114 kernel: GPT:9289727 != 9297919 Aug 13 01:07:12.377126 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:07:12.378488 kernel: GPT:9289727 != 9297919 Aug 13 01:07:12.379727 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:07:12.380962 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:07:12.383528 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:07:12.422721 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (443) Aug 13 01:07:12.426252 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (457) Aug 13 01:07:12.433061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 01:07:12.448718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Aug 13 01:07:12.455801 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:07:12.456373 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:07:12.465283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:07:12.470826 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:07:12.475194 disk-uuid[569]: Primary Header is updated. Aug 13 01:07:12.475194 disk-uuid[569]: Secondary Entries is updated. Aug 13 01:07:12.475194 disk-uuid[569]: Secondary Header is updated. Aug 13 01:07:12.480731 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:07:12.485733 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:07:13.489775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:07:13.493799 disk-uuid[570]: The operation has completed successfully. Aug 13 01:07:13.541004 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:07:13.541123 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:07:13.573809 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:07:13.576725 sh[584]: Success Aug 13 01:07:13.588754 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 01:07:13.632373 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:07:13.641509 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:07:13.642371 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 01:07:13.661200 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 Aug 13 01:07:13.661235 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:07:13.663437 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 01:07:13.665850 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 01:07:13.667782 kernel: BTRFS info (device dm-0): using free space tree Aug 13 01:07:13.676715 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 01:07:13.677389 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:07:13.678512 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:07:13.683801 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 01:07:13.687944 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 01:07:13.705087 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 01:07:13.705119 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:07:13.708134 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:07:13.714425 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:07:13.714449 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 01:07:13.720751 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 01:07:13.723147 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:07:13.728854 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 13 01:07:13.814217 ignition[685]: Ignition 2.20.0 Aug 13 01:07:13.814309 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:07:13.814224 ignition[685]: Stage: fetch-offline Aug 13 01:07:13.814254 ignition[685]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:07:13.814304 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:07:13.814418 ignition[685]: parsed url from cmdline: "" Aug 13 01:07:13.814422 ignition[685]: no config URL provided Aug 13 01:07:13.814427 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:07:13.814437 ignition[685]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:07:13.814441 ignition[685]: failed to fetch config: resource requires networking Aug 13 01:07:13.814839 ignition[685]: Ignition finished successfully Aug 13 01:07:13.824028 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:07:13.826398 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:07:13.851300 systemd-networkd[768]: lo: Link UP Aug 13 01:07:13.851312 systemd-networkd[768]: lo: Gained carrier Aug 13 01:07:13.852880 systemd-networkd[768]: Enumeration completed Aug 13 01:07:13.853229 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:07:13.853234 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:07:13.853957 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:07:13.855072 systemd[1]: Reached target network.target - Network. 
Aug 13 01:07:13.855812 systemd-networkd[768]: eth0: Link UP Aug 13 01:07:13.855816 systemd-networkd[768]: eth0: Gained carrier Aug 13 01:07:13.855823 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:07:13.862837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 01:07:13.873376 ignition[772]: Ignition 2.20.0 Aug 13 01:07:13.873390 ignition[772]: Stage: fetch Aug 13 01:07:13.873528 ignition[772]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:07:13.873538 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:07:13.873616 ignition[772]: parsed url from cmdline: "" Aug 13 01:07:13.873620 ignition[772]: no config URL provided Aug 13 01:07:13.873625 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:07:13.873633 ignition[772]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:07:13.873654 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:07:13.873841 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:07:14.073944 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:07:14.074139 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:07:14.378772 systemd-networkd[768]: eth0: DHCPv4 address 172.234.214.191/24, gateway 172.234.214.1 acquired from 23.205.167.149 Aug 13 01:07:14.474544 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:07:14.570442 ignition[772]: PUT result: OK Aug 13 01:07:14.570490 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:07:14.694938 ignition[772]: GET result: OK Aug 13 01:07:14.694999 ignition[772]: parsing config with SHA512: 
d2a8b9b597c4cef799efa2df811a6a0df0889bbc3c0b6a66ac84d81cb01e10757cc18fccbc758d9da08e67cc971f6ccf9370b787dec391118a577dd272a2af4e Aug 13 01:07:14.697793 unknown[772]: fetched base config from "system" Aug 13 01:07:14.698167 ignition[772]: fetch: fetch complete Aug 13 01:07:14.697802 unknown[772]: fetched base config from "system" Aug 13 01:07:14.698173 ignition[772]: fetch: fetch passed Aug 13 01:07:14.697808 unknown[772]: fetched user config from "akamai" Aug 13 01:07:14.698214 ignition[772]: Ignition finished successfully Aug 13 01:07:14.701655 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:07:14.706837 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:07:14.721325 ignition[780]: Ignition 2.20.0 Aug 13 01:07:14.721335 ignition[780]: Stage: kargs Aug 13 01:07:14.721473 ignition[780]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:07:14.721483 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:07:14.722891 ignition[780]: kargs: kargs passed Aug 13 01:07:14.724964 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:07:14.722929 ignition[780]: Ignition finished successfully Aug 13 01:07:14.730822 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 01:07:14.751490 ignition[786]: Ignition 2.20.0 Aug 13 01:07:14.751503 ignition[786]: Stage: disks Aug 13 01:07:14.751648 ignition[786]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:07:14.751659 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:07:14.753806 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 01:07:14.752456 ignition[786]: disks: disks passed Aug 13 01:07:14.778807 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 01:07:14.752491 ignition[786]: Ignition finished successfully Aug 13 01:07:14.779650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Aug 13 01:07:14.780439 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:07:14.781517 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:07:14.782486 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:07:14.796847 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 01:07:14.812263 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 01:07:14.814099 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:07:14.819782 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:07:14.892722 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none. Aug 13 01:07:14.893347 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:07:14.894320 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:07:14.905772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:07:14.908455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:07:14.909905 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 01:07:14.910891 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:07:14.910918 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:07:14.914131 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:07:14.915908 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 01:07:14.923715 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (802) Aug 13 01:07:14.926782 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 01:07:14.926803 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:07:14.929514 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:07:14.935341 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:07:14.935363 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 01:07:14.938220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:07:14.973590 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:07:14.979115 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:07:14.982556 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:07:14.986862 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:07:15.078935 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 01:07:15.082787 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:07:15.085829 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:07:15.094491 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:07:15.098745 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 01:07:15.113389 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 01:07:15.119168 ignition[916]: INFO : Ignition 2.20.0 Aug 13 01:07:15.119856 ignition[916]: INFO : Stage: mount Aug 13 01:07:15.120340 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:07:15.120340 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:07:15.121824 ignition[916]: INFO : mount: mount passed Aug 13 01:07:15.121824 ignition[916]: INFO : Ignition finished successfully Aug 13 01:07:15.121675 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:07:15.127785 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:07:15.309890 systemd-networkd[768]: eth0: Gained IPv6LL Aug 13 01:07:15.898822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:07:15.910722 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (926) Aug 13 01:07:15.913751 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 01:07:15.913774 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:07:15.915840 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:07:15.921698 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:07:15.921728 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 01:07:15.923998 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 01:07:15.941460 ignition[943]: INFO : Ignition 2.20.0 Aug 13 01:07:15.942539 ignition[943]: INFO : Stage: files Aug 13 01:07:15.942539 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:07:15.942539 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:07:15.944856 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:07:15.944856 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:07:15.944856 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:07:15.947379 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:07:15.947379 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:07:15.947379 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:07:15.946928 unknown[943]: wrote ssh authorized keys file for user: core Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:07:15.950521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 01:07:16.372065 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Aug 13 01:07:16.741173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:07:16.741173 ignition[943]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Aug 13 01:07:16.743546 ignition[943]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:07:16.743546 ignition[943]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:07:16.743546 ignition[943]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Aug 13 01:07:16.743546 ignition[943]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:07:16.743546 ignition[943]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:07:16.743546 ignition[943]: INFO : files: files passed Aug 13 01:07:16.743546 ignition[943]: INFO : Ignition finished successfully Aug 13 01:07:16.745985 systemd[1]: Finished ignition-files.service - Ignition (files). 
Aug 13 01:07:16.751885 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:07:16.754852 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:07:16.758594 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:07:16.759427 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:07:16.783938 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:07:16.785364 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:07:16.785364 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:07:16.787201 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:07:16.785514 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:07:16.791889 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:07:16.817743 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:07:16.817887 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:07:16.819475 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:07:16.820149 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:07:16.820687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:07:16.829826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:07:16.840881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:07:16.845830 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Aug 13 01:07:16.854636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:07:16.855968 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:07:16.856581 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 01:07:16.857214 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:07:16.857317 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:07:16.858522 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 01:07:16.859430 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 01:07:16.860413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 01:07:16.861422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:07:16.862640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 01:07:16.863835 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 01:07:16.864828 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:07:16.866304 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 01:07:16.867485 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 01:07:16.868624 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 01:07:16.869728 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:07:16.869853 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:07:16.871359 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:07:16.872090 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:07:16.873186 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 01:07:16.873274 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:07:16.874551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:07:16.874681 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:07:16.877875 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:07:16.877980 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:07:16.878903 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:07:16.879058 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 01:07:16.886832 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 01:07:16.887347 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:07:16.887449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:07:16.893901 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 01:07:16.894781 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:07:16.894958 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:07:16.896920 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:07:16.897417 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:07:16.907330 ignition[995]: INFO : Ignition 2.20.0
Aug 13 01:07:16.907330 ignition[995]: INFO : Stage: umount
Aug 13 01:07:16.907330 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:07:16.907330 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:07:16.907330 ignition[995]: INFO : umount: umount passed
Aug 13 01:07:16.907330 ignition[995]: INFO : Ignition finished successfully
Aug 13 01:07:16.903967 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:07:16.904071 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 01:07:16.909021 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:07:16.909314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 01:07:16.912307 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:07:16.912357 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 01:07:16.914242 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:07:16.914291 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 01:07:16.916099 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:07:16.916145 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 01:07:16.916712 systemd[1]: Stopped target network.target - Network.
Aug 13 01:07:16.917300 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:07:16.917349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:07:16.919823 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 01:07:16.920315 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:07:16.922181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:07:16.923296 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 01:07:16.924325 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 01:07:16.927068 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:07:16.927116 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:07:16.927777 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:07:16.927819 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:07:16.930975 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:07:16.931055 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 01:07:16.931975 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 01:07:16.932023 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 01:07:16.932743 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 01:07:16.933813 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 01:07:16.937668 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:07:16.942899 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:07:16.943023 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 01:07:16.947570 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 01:07:16.948551 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:07:16.948680 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 01:07:16.951018 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 01:07:16.951419 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:07:16.951600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 01:07:16.974922 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:07:16.974992 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:07:16.977336 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:07:16.977392 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 01:07:16.987809 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 01:07:16.989866 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:07:16.989925 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:07:16.991014 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:07:16.991065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:07:16.992622 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:07:16.992671 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:07:16.993384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 01:07:16.993431 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:07:16.994844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:07:16.997948 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:07:16.998013 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:07:17.008592 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:07:17.008724 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 01:07:17.013547 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:07:17.013753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:07:17.015434 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:07:17.015480 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:07:17.016564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:07:17.016600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:07:17.017786 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:07:17.017840 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:07:17.019607 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:07:17.019655 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:07:17.020986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:07:17.021034 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:07:17.031880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 01:07:17.033127 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 01:07:17.033187 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:07:17.034610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:07:17.034661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:07:17.038017 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 01:07:17.038083 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:07:17.038420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:07:17.038526 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 01:07:17.039901 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 01:07:17.046861 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 01:07:17.053853 systemd[1]: Switching root.
Aug 13 01:07:17.084816 systemd-journald[178]: Journal stopped
Aug 13 01:07:18.145133 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Aug 13 01:07:18.145157 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:07:18.145169 kernel: SELinux: policy capability open_perms=1
Aug 13 01:07:18.145178 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:07:18.145187 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:07:18.145199 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:07:18.145210 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:07:18.145219 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:07:18.145228 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:07:18.145237 kernel: audit: type=1403 audit(1755047237.203:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:07:18.145247 systemd[1]: Successfully loaded SELinux policy in 49.147ms.
Aug 13 01:07:18.145259 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.992ms.
Aug 13 01:07:18.145270 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:07:18.145281 systemd[1]: Detected virtualization kvm.
Aug 13 01:07:18.145291 systemd[1]: Detected architecture x86-64.
Aug 13 01:07:18.145300 systemd[1]: Detected first boot.
Aug 13 01:07:18.145312 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:07:18.145322 zram_generator::config[1041]: No configuration found.
Aug 13 01:07:18.145332 kernel: Guest personality initialized and is inactive
Aug 13 01:07:18.145342 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 01:07:18.145351 kernel: Initialized host personality
Aug 13 01:07:18.145360 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 01:07:18.145369 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:07:18.145382 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 01:07:18.145392 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:07:18.145401 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 01:07:18.145411 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:07:18.145422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 01:07:18.145431 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 01:07:18.145441 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 01:07:18.145453 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 01:07:18.145463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 01:07:18.145473 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 01:07:18.145483 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 01:07:18.145493 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 01:07:18.145502 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:07:18.145512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:07:18.145522 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 01:07:18.145532 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 01:07:18.145544 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 01:07:18.145557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:07:18.145567 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 01:07:18.145577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:07:18.145587 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 01:07:18.145597 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 01:07:18.145608 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:07:18.145620 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 01:07:18.145630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:07:18.145641 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:07:18.145650 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:07:18.145660 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:07:18.145670 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 01:07:18.145680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 01:07:18.145690 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 01:07:18.145744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:07:18.145763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:07:18.145774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:07:18.145784 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 01:07:18.145794 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 01:07:18.145806 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 01:07:18.145816 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 01:07:18.145827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:07:18.145837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 01:07:18.145847 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 01:07:18.145857 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 01:07:18.145868 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 01:07:18.145878 systemd[1]: Reached target machines.target - Containers.
Aug 13 01:07:18.145890 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 01:07:18.145901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:07:18.145912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:07:18.145922 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 01:07:18.145932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:07:18.145942 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:07:18.145952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:07:18.145962 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 01:07:18.145972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:07:18.145985 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:07:18.145995 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:07:18.146005 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 01:07:18.146015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:07:18.146025 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:07:18.146036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:07:18.146046 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:07:18.146056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:07:18.146068 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:07:18.146078 kernel: loop: module loaded
Aug 13 01:07:18.146088 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 01:07:18.146099 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 01:07:18.146304 kernel: fuse: init (API version 7.39)
Aug 13 01:07:18.146313 kernel: ACPI: bus type drm_connector registered
Aug 13 01:07:18.146324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:07:18.146352 systemd-journald[1128]: Collecting audit messages is disabled.
Aug 13 01:07:18.146375 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:07:18.146385 systemd[1]: Stopped verity-setup.service.
Aug 13 01:07:18.146396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:07:18.146406 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 01:07:18.146419 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 01:07:18.146429 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 01:07:18.146439 systemd-journald[1128]: Journal started
Aug 13 01:07:18.146458 systemd-journald[1128]: Runtime Journal (/run/log/journal/38ea4fce7f8f4e9e8312160ba2e9ead8) is 8M, max 78.3M, 70.3M free.
Aug 13 01:07:17.800056 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:07:18.149920 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:07:17.811954 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 01:07:17.812412 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:07:18.150342 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 01:07:18.151028 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 01:07:18.152765 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 01:07:18.153646 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 01:07:18.154524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:07:18.155470 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:07:18.155681 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 01:07:18.156932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:07:18.157197 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:07:18.158332 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:07:18.158596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:07:18.159461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:07:18.159864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:07:18.160905 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:07:18.161363 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 01:07:18.162535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:07:18.162830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:07:18.163859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:07:18.164743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:07:18.165666 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 01:07:18.166723 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 01:07:18.179553 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:07:18.186818 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 01:07:18.191301 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 01:07:18.191951 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 01:07:18.192035 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:07:18.193422 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 01:07:18.205309 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 01:07:18.208841 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 01:07:18.209467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:07:18.213762 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 01:07:18.222163 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 01:07:18.223399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:07:18.224669 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 01:07:18.225286 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:07:18.227199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:07:18.247825 systemd-journald[1128]: Time spent on flushing to /var/log/journal/38ea4fce7f8f4e9e8312160ba2e9ead8 is 94.437ms for 968 entries.
Aug 13 01:07:18.247825 systemd-journald[1128]: System Journal (/var/log/journal/38ea4fce7f8f4e9e8312160ba2e9ead8) is 8M, max 195.6M, 187.6M free.
Aug 13 01:07:18.361228 systemd-journald[1128]: Received client request to flush runtime journal.
Aug 13 01:07:18.361274 kernel: loop0: detected capacity change from 0 to 8
Aug 13 01:07:18.361458 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 01:07:18.267677 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 01:07:18.278813 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 01:07:18.282864 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 01:07:18.367078 kernel: loop1: detected capacity change from 0 to 138176
Aug 13 01:07:18.284906 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 01:07:18.287128 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 01:07:18.309695 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:07:18.312223 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 01:07:18.314136 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 01:07:18.323004 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 01:07:18.333280 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 01:07:18.357809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:07:18.366002 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 01:07:18.369084 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 01:07:18.371243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 01:07:18.377372 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 01:07:18.385981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:07:18.418777 kernel: loop2: detected capacity change from 0 to 229808
Aug 13 01:07:18.421492 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Aug 13 01:07:18.421511 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Aug 13 01:07:18.426655 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:07:18.478726 kernel: loop3: detected capacity change from 0 to 147912
Aug 13 01:07:18.523735 kernel: loop4: detected capacity change from 0 to 8
Aug 13 01:07:18.529737 kernel: loop5: detected capacity change from 0 to 138176
Aug 13 01:07:18.552733 kernel: loop6: detected capacity change from 0 to 229808
Aug 13 01:07:18.580726 kernel: loop7: detected capacity change from 0 to 147912
Aug 13 01:07:18.603532 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Aug 13 01:07:18.604211 (sd-merge)[1193]: Merged extensions into '/usr'.
Aug 13 01:07:18.616864 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 01:07:18.617126 systemd[1]: Reloading...
Aug 13 01:07:18.711759 zram_generator::config[1221]: No configuration found.
Aug 13 01:07:18.799288 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:07:18.865371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:07:18.954540 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 01:07:18.955511 systemd[1]: Reloading finished in 337 ms.
Aug 13 01:07:18.974478 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 01:07:18.975872 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 01:07:18.976921 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 01:07:18.987077 systemd[1]: Starting ensure-sysext.service...
Aug 13 01:07:18.991018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:07:19.000889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:07:19.019385 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Aug 13 01:07:19.019402 systemd[1]: Reloading...
Aug 13 01:07:19.037541 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:07:19.037827 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 01:07:19.038666 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:07:19.041348 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 01:07:19.041473 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 01:07:19.047629 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:07:19.047725 systemd-tmpfiles[1266]: Skipping /boot
Aug 13 01:07:19.052392 systemd-udevd[1267]: Using default interface naming scheme 'v255'.
Aug 13 01:07:19.066071 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:07:19.066841 systemd-tmpfiles[1266]: Skipping /boot
Aug 13 01:07:19.125754 zram_generator::config[1294]: No configuration found.
Aug 13 01:07:19.269760 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1316)
Aug 13 01:07:19.338781 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 01:07:19.355132 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 01:07:19.355588 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 01:07:19.355609 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 01:07:19.358811 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 01:07:19.374802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:07:19.403067 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:07:19.413735 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:07:19.431845 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:07:19.478978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:07:19.479766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 01:07:19.480157 systemd[1]: Reloading finished in 460 ms.
Aug 13 01:07:19.491133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:07:19.503394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:07:19.520652 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 01:07:19.527613 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:07:19.551651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:07:19.557828 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:07:19.561840 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 01:07:19.564280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:07:19.571557 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 01:07:19.575093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:07:19.580340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:07:19.583327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:07:19.586984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:07:19.591907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:07:19.600865 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 01:07:19.603413 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:07:19.607194 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 01:07:19.608259 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 01:07:19.615890 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:07:19.626852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:07:19.647998 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 01:07:19.651901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 01:07:19.654611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:07:19.656041 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:07:19.657310 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 01:07:19.658539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:07:19.659257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:07:19.660753 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:07:19.661024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:07:19.662348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:07:19.663072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:07:19.664389 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:07:19.664932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:07:19.671311 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 01:07:19.679165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:07:19.688893 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 01:07:19.689813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:07:19.689882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:07:19.692867 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 01:07:19.696566 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 01:07:19.706545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 01:07:19.710246 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 01:07:19.713169 augenrules[1419]: No rules
Aug 13 01:07:19.718769 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 01:07:19.720030 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:07:19.721499 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:07:19.722451 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:07:19.731181 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 01:07:19.744088 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 01:07:19.754629 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 01:07:19.770180 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 01:07:19.833601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:07:19.887616 systemd-networkd[1393]: lo: Link UP
Aug 13 01:07:19.887623 systemd-networkd[1393]: lo: Gained carrier
Aug 13 01:07:19.889538 systemd-networkd[1393]: Enumeration completed
Aug 13 01:07:19.889671 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:07:19.891871 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:07:19.891933 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:07:19.892555 systemd-networkd[1393]: eth0: Link UP
Aug 13 01:07:19.892653 systemd-networkd[1393]: eth0: Gained carrier
Aug 13 01:07:19.892728 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:07:19.899809 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 01:07:19.902427 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 01:07:19.903417 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 01:07:19.904812 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 01:07:19.905484 systemd-resolved[1394]: Positive Trust Anchors:
Aug 13 01:07:19.905492 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:07:19.905519 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:07:19.911862 systemd-resolved[1394]: Defaulting to hostname 'linux'.
Aug 13 01:07:19.914353 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:07:19.915017 systemd[1]: Reached target network.target - Network.
Aug 13 01:07:19.915503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:07:19.916404 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:07:19.917221 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 01:07:19.918207 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 01:07:19.919005 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 01:07:19.919738 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 01:07:19.920456 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 01:07:19.921299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:07:19.921327 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:07:19.921826 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:07:19.924738 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 01:07:19.926991 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 01:07:19.930595 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 01:07:19.931494 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 01:07:19.932288 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 01:07:19.935684 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 01:07:19.936774 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 01:07:19.938352 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 01:07:19.939292 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 01:07:19.940685 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:07:19.941338 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:07:19.942084 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:07:19.942123 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:07:19.946789 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 01:07:19.951433 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 01:07:19.954693 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 01:07:19.957814 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 01:07:19.961892 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 01:07:19.962619 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 01:07:19.974549 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 01:07:19.977860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 01:07:19.988835 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 01:07:19.993280 jq[1449]: false
Aug 13 01:07:20.000981 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 01:07:20.004888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:07:20.005550 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:07:20.009853 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 01:07:20.016778 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 01:07:20.020367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:07:20.021300 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 01:07:20.021659 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:07:20.021985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 01:07:20.032579 extend-filesystems[1450]: Found loop4
Aug 13 01:07:20.033321 extend-filesystems[1450]: Found loop5
Aug 13 01:07:20.033321 extend-filesystems[1450]: Found loop6
Aug 13 01:07:20.033321 extend-filesystems[1450]: Found loop7
Aug 13 01:07:20.033321 extend-filesystems[1450]: Found sda
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda1
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda2
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda3
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found usr
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda4
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda6
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda7
Aug 13 01:07:20.042841 extend-filesystems[1450]: Found sda9
Aug 13 01:07:20.042841 extend-filesystems[1450]: Checking size of /dev/sda9
Aug 13 01:07:20.063830 jq[1463]: true
Aug 13 01:07:20.063512 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:07:20.063766 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 01:07:20.074103 jq[1473]: true
Aug 13 01:07:20.071046 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 01:07:20.075152 update_engine[1460]: I20250813 01:07:20.075076 1460 main.cc:92] Flatcar Update Engine starting
Aug 13 01:07:20.077345 dbus-daemon[1448]: [system] SELinux support is enabled
Aug 13 01:07:20.079994 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 01:07:20.085642 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:07:20.086222 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 01:07:20.087338 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:07:20.087356 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 01:07:20.095973 coreos-metadata[1447]: Aug 13 01:07:20.095 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:07:20.097171 extend-filesystems[1450]: Resized partition /dev/sda9
Aug 13 01:07:20.100833 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 01:07:20.106783 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024)
Aug 13 01:07:20.111134 update_engine[1460]: I20250813 01:07:20.101315 1460 update_check_scheduler.cc:74] Next update check in 3m23s
Aug 13 01:07:20.108904 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 01:07:20.116723 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 01:07:20.124747 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 01:07:20.140489 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:07:20.140489 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:07:20.140489 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 01:07:20.149546 extend-filesystems[1450]: Resized filesystem in /dev/sda9
Aug 13 01:07:20.145177 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:07:20.145447 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 01:07:20.184343 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:07:20.185383 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 01:07:20.196866 systemd[1]: Starting sshkeys.service...
Aug 13 01:07:20.201563 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1305)
Aug 13 01:07:20.202890 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 01:07:20.203329 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:07:20.203787 systemd-logind[1456]: New seat seat0.
Aug 13 01:07:20.204874 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 01:07:20.242579 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 01:07:20.251409 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 01:07:20.329340 coreos-metadata[1512]: Aug 13 01:07:20.328 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:07:20.330444 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 01:07:20.391726 containerd[1478]: time="2025-08-13T01:07:20.391572083Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 13 01:07:20.415259 containerd[1478]: time="2025-08-13T01:07:20.414753594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417193655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417229355Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417247955Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417428056Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417448146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417523246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417536956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417824766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417845206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417862216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418650 containerd[1478]: time="2025-08-13T01:07:20.417874716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418942 containerd[1478]: time="2025-08-13T01:07:20.417983266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418942 containerd[1478]: time="2025-08-13T01:07:20.418248766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418942 containerd[1478]: time="2025-08-13T01:07:20.418422726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:07:20.418942 containerd[1478]: time="2025-08-13T01:07:20.418438106Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 01:07:20.418942 containerd[1478]: time="2025-08-13T01:07:20.418548676Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 01:07:20.418942 containerd[1478]: time="2025-08-13T01:07:20.418615766Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:07:20.420966 containerd[1478]: time="2025-08-13T01:07:20.420946877Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 01:07:20.421043 containerd[1478]: time="2025-08-13T01:07:20.421030807Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 01:07:20.421110 containerd[1478]: time="2025-08-13T01:07:20.421098677Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 01:07:20.421166 containerd[1478]: time="2025-08-13T01:07:20.421154387Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 01:07:20.421211 containerd[1478]: time="2025-08-13T01:07:20.421199967Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 01:07:20.421358 containerd[1478]: time="2025-08-13T01:07:20.421341508Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 01:07:20.421651 containerd[1478]: time="2025-08-13T01:07:20.421621918Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 01:07:20.421823 containerd[1478]: time="2025-08-13T01:07:20.421806148Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 01:07:20.421878 containerd[1478]: time="2025-08-13T01:07:20.421865868Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 01:07:20.421924 containerd[1478]: time="2025-08-13T01:07:20.421912998Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 01:07:20.421968 containerd[1478]: time="2025-08-13T01:07:20.421957608Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422011 containerd[1478]: time="2025-08-13T01:07:20.422000458Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422066 containerd[1478]: time="2025-08-13T01:07:20.422053418Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422110 containerd[1478]: time="2025-08-13T01:07:20.422098938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422153 containerd[1478]: time="2025-08-13T01:07:20.422142208Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422194 containerd[1478]: time="2025-08-13T01:07:20.422183868Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422260 containerd[1478]: time="2025-08-13T01:07:20.422233128Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422304 containerd[1478]: time="2025-08-13T01:07:20.422293758Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 01:07:20.422356 containerd[1478]: time="2025-08-13T01:07:20.422345108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422400 containerd[1478]: time="2025-08-13T01:07:20.422389478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422475 containerd[1478]: time="2025-08-13T01:07:20.422462958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422529 containerd[1478]: time="2025-08-13T01:07:20.422516788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422581 containerd[1478]: time="2025-08-13T01:07:20.422569878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422625 containerd[1478]: time="2025-08-13T01:07:20.422614418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422665 containerd[1478]: time="2025-08-13T01:07:20.422655098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422724 containerd[1478]: time="2025-08-13T01:07:20.422695708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422786 containerd[1478]: time="2025-08-13T01:07:20.422773358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422833 containerd[1478]: time="2025-08-13T01:07:20.422822788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422873 containerd[1478]: time="2025-08-13T01:07:20.422863788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422915 containerd[1478]: time="2025-08-13T01:07:20.422903398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.422973 containerd[1478]: time="2025-08-13T01:07:20.422956678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.423032 containerd[1478]: time="2025-08-13T01:07:20.423019218Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 01:07:20.423100 containerd[1478]: time="2025-08-13T01:07:20.423086888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.423158 containerd[1478]: time="2025-08-13T01:07:20.423146388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.423201 containerd[1478]: time="2025-08-13T01:07:20.423190468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 01:07:20.423285 containerd[1478]: time="2025-08-13T01:07:20.423273328Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 01:07:20.423393 containerd[1478]: time="2025-08-13T01:07:20.423377359Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 01:07:20.423440 containerd[1478]: time="2025-08-13T01:07:20.423429099Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 01:07:20.423482 containerd[1478]: time="2025-08-13T01:07:20.423470509Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 01:07:20.423529 containerd[1478]: time="2025-08-13T01:07:20.423517699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.423589 containerd[1478]: time="2025-08-13T01:07:20.423570779Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 01:07:20.423638 containerd[1478]: time="2025-08-13T01:07:20.423626619Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 01:07:20.423678 containerd[1478]: time="2025-08-13T01:07:20.423667939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 01:07:20.423999 containerd[1478]: time="2025-08-13T01:07:20.423959239Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 01:07:20.424165 containerd[1478]: time="2025-08-13T01:07:20.424150929Z" level=info msg="Connect containerd service"
Aug 13 01:07:20.424225 containerd[1478]: time="2025-08-13T01:07:20.424213629Z" level=info msg="using legacy CRI server"
Aug 13 01:07:20.424262 containerd[1478]: time="2025-08-13T01:07:20.424252529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 01:07:20.424383 containerd[1478]: time="2025-08-13T01:07:20.424370769Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 01:07:20.425036 containerd[1478]: time="2025-08-13T01:07:20.425015919Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:07:20.425192 containerd[1478]: time="2025-08-13T01:07:20.425169129Z" level=info msg="Start subscribing containerd event"
Aug 13 01:07:20.425252 containerd[1478]: time="2025-08-13T01:07:20.425240649Z" level=info msg="Start recovering state"
Aug 13 01:07:20.425351 containerd[1478]: time="2025-08-13T01:07:20.425338130Z" level=info msg="Start event monitor"
Aug 13 01:07:20.425402 containerd[1478]: time="2025-08-13T01:07:20.425391280Z" level=info msg="Start snapshots syncer"
Aug 13 01:07:20.425440 containerd[1478]: time="2025-08-13T01:07:20.425430790Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:07:20.425477 containerd[1478]: time="2025-08-13T01:07:20.425468180Z" level=info msg="Start streaming server"
Aug 13 01:07:20.425852 containerd[1478]: time="2025-08-13T01:07:20.425835530Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:07:20.425945 containerd[1478]: time="2025-08-13T01:07:20.425931200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:07:20.427716 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 01:07:20.429540 containerd[1478]: time="2025-08-13T01:07:20.429308802Z" level=info msg="containerd successfully booted in 0.039375s" Aug 13 01:07:20.453780 systemd-networkd[1393]: eth0: DHCPv4 address 172.234.214.191/24, gateway 172.234.214.1 acquired from 23.205.167.149 Aug 13 01:07:20.453864 dbus-daemon[1448]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1393 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:07:20.455736 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Aug 13 01:07:20.463900 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:07:20.485721 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:07:20.509430 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:07:20.517456 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:07:20.526370 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:07:20.526626 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:07:20.530069 dbus-daemon[1448]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:07:20.530508 dbus-daemon[1448]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1525 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:07:20.535375 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:07:20.536806 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:07:20.543891 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:07:20.545507 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Aug 13 01:07:20.551873 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:07:20.554025 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:07:20.554873 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:07:20.559599 polkitd[1539]: Started polkitd version 121 Aug 13 01:07:20.563920 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:07:20.564055 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:07:20.564694 polkitd[1539]: Finished loading, compiling and executing 2 rules Aug 13 01:07:20.565148 dbus-daemon[1448]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:07:20.565269 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:07:20.565472 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:07:21.303141 systemd-timesyncd[1395]: Contacted time server 23.155.72.147:123 (0.flatcar.pool.ntp.org). Aug 13 01:07:21.303198 systemd-timesyncd[1395]: Initial clock synchronization to Wed 2025-08-13 01:07:21.303023 UTC. Aug 13 01:07:21.303972 systemd-resolved[1394]: Clock change detected. Flushing caches. Aug 13 01:07:21.306955 systemd-hostnamed[1525]: Hostname set to <172-234-214-191> (transient) Aug 13 01:07:21.307414 systemd-resolved[1394]: System hostname changed to '172-234-214-191'. Aug 13 01:07:21.739111 systemd-networkd[1393]: eth0: Gained IPv6LL Aug 13 01:07:21.743257 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:07:21.744963 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:07:21.752096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:07:21.754665 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:07:21.800542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 13 01:07:21.838405 coreos-metadata[1447]: Aug 13 01:07:21.838 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:07:21.947758 coreos-metadata[1447]: Aug 13 01:07:21.947 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:07:22.071160 coreos-metadata[1512]: Aug 13 01:07:22.071 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:07:22.172951 coreos-metadata[1512]: Aug 13 01:07:22.172 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:07:22.291097 coreos-metadata[1447]: Aug 13 01:07:22.290 INFO Fetch successful Aug 13 01:07:22.291097 coreos-metadata[1447]: Aug 13 01:07:22.291 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:07:22.334066 coreos-metadata[1512]: Aug 13 01:07:22.333 INFO Fetch successful Aug 13 01:07:22.348858 update-ssh-keys[1568]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:07:22.350270 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:07:22.353564 systemd[1]: Finished sshkeys.service. Aug 13 01:07:22.590054 coreos-metadata[1447]: Aug 13 01:07:22.589 INFO Fetch successful Aug 13 01:07:22.657603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:07:22.661653 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:07:22.665575 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:07:22.668093 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:07:22.668485 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:07:22.669635 systemd[1]: Startup finished in 861ms (kernel) + 6.500s (initrd) + 4.780s (userspace) = 12.142s. 
Aug 13 01:07:23.190694 kubelet[1594]: E0813 01:07:23.190627 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:07:23.194730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:07:23.194939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:07:23.195685 systemd[1]: kubelet.service: Consumed 918ms CPU time, 266.9M memory peak. Aug 13 01:07:26.238498 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:07:26.246257 systemd[1]: Started sshd@0-172.234.214.191:22-139.178.89.65:54136.service - OpenSSH per-connection server daemon (139.178.89.65:54136). Aug 13 01:07:26.577985 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 54136 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:26.579560 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:26.589179 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:07:26.601075 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:07:26.604402 systemd-logind[1456]: New session 1 of user core. Aug 13 01:07:26.613084 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:07:26.619088 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:07:26.624395 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:07:26.626695 systemd-logind[1456]: New session c1 of user core. Aug 13 01:07:26.754846 systemd[1611]: Queued start job for default target default.target. 
Aug 13 01:07:26.766461 systemd[1611]: Created slice app.slice - User Application Slice. Aug 13 01:07:26.766489 systemd[1611]: Reached target paths.target - Paths. Aug 13 01:07:26.766530 systemd[1611]: Reached target timers.target - Timers. Aug 13 01:07:26.768097 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:07:26.778275 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:07:26.778326 systemd[1611]: Reached target sockets.target - Sockets. Aug 13 01:07:26.778361 systemd[1611]: Reached target basic.target - Basic System. Aug 13 01:07:26.778403 systemd[1611]: Reached target default.target - Main User Target. Aug 13 01:07:26.778431 systemd[1611]: Startup finished in 142ms. Aug 13 01:07:26.778682 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:07:26.789034 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:07:27.042057 systemd[1]: Started sshd@1-172.234.214.191:22-139.178.89.65:54148.service - OpenSSH per-connection server daemon (139.178.89.65:54148). Aug 13 01:07:27.381226 sshd[1622]: Accepted publickey for core from 139.178.89.65 port 54148 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:27.382837 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:27.388030 systemd-logind[1456]: New session 2 of user core. Aug 13 01:07:27.393009 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:07:27.628919 sshd[1624]: Connection closed by 139.178.89.65 port 54148 Aug 13 01:07:27.629390 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:27.632025 systemd[1]: sshd@1-172.234.214.191:22-139.178.89.65:54148.service: Deactivated successfully. Aug 13 01:07:27.633679 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:07:27.634816 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. 
Aug 13 01:07:27.635792 systemd-logind[1456]: Removed session 2. Aug 13 01:07:27.685980 systemd[1]: Started sshd@2-172.234.214.191:22-139.178.89.65:54152.service - OpenSSH per-connection server daemon (139.178.89.65:54152). Aug 13 01:07:28.010957 sshd[1630]: Accepted publickey for core from 139.178.89.65 port 54152 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:28.012262 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:28.016324 systemd-logind[1456]: New session 3 of user core. Aug 13 01:07:28.026013 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:07:28.255351 sshd[1632]: Connection closed by 139.178.89.65 port 54152 Aug 13 01:07:28.255839 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:28.259755 systemd[1]: sshd@2-172.234.214.191:22-139.178.89.65:54152.service: Deactivated successfully. Aug 13 01:07:28.261443 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:07:28.263007 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:07:28.263974 systemd-logind[1456]: Removed session 3. Aug 13 01:07:28.322099 systemd[1]: Started sshd@3-172.234.214.191:22-139.178.89.65:54166.service - OpenSSH per-connection server daemon (139.178.89.65:54166). Aug 13 01:07:28.655972 sshd[1638]: Accepted publickey for core from 139.178.89.65 port 54166 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:28.657596 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:28.661929 systemd-logind[1456]: New session 4 of user core. Aug 13 01:07:28.676005 systemd[1]: Started session-4.scope - Session 4 of User core. 
Aug 13 01:07:28.905868 sshd[1640]: Connection closed by 139.178.89.65 port 54166 Aug 13 01:07:28.906533 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:28.909601 systemd[1]: sshd@3-172.234.214.191:22-139.178.89.65:54166.service: Deactivated successfully. Aug 13 01:07:28.911602 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:07:28.912891 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:07:28.913757 systemd-logind[1456]: Removed session 4. Aug 13 01:07:28.962852 systemd[1]: Started sshd@4-172.234.214.191:22-139.178.89.65:33840.service - OpenSSH per-connection server daemon (139.178.89.65:33840). Aug 13 01:07:29.291972 sshd[1646]: Accepted publickey for core from 139.178.89.65 port 33840 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:29.294119 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:29.298601 systemd-logind[1456]: New session 5 of user core. Aug 13 01:07:29.306012 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:07:29.494969 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:07:29.495292 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:07:29.511626 sudo[1649]: pam_unix(sudo:session): session closed for user root Aug 13 01:07:29.561084 sshd[1648]: Connection closed by 139.178.89.65 port 33840 Aug 13 01:07:29.562044 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:29.565273 systemd[1]: sshd@4-172.234.214.191:22-139.178.89.65:33840.service: Deactivated successfully. Aug 13 01:07:29.567185 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:07:29.568416 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:07:29.569439 systemd-logind[1456]: Removed session 5. 
Aug 13 01:07:29.621039 systemd[1]: Started sshd@5-172.234.214.191:22-139.178.89.65:33842.service - OpenSSH per-connection server daemon (139.178.89.65:33842). Aug 13 01:07:29.952002 sshd[1655]: Accepted publickey for core from 139.178.89.65 port 33842 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:29.953226 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:29.957065 systemd-logind[1456]: New session 6 of user core. Aug 13 01:07:29.968031 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:07:30.148671 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:07:30.149045 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:07:30.152992 sudo[1659]: pam_unix(sudo:session): session closed for user root Aug 13 01:07:30.158529 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:07:30.158832 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:07:30.172284 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:07:30.197604 augenrules[1681]: No rules Aug 13 01:07:30.198782 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:07:30.199040 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:07:30.200142 sudo[1658]: pam_unix(sudo:session): session closed for user root Aug 13 01:07:30.250579 sshd[1657]: Connection closed by 139.178.89.65 port 33842 Aug 13 01:07:30.251595 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:30.254512 systemd[1]: sshd@5-172.234.214.191:22-139.178.89.65:33842.service: Deactivated successfully. Aug 13 01:07:30.256431 systemd[1]: session-6.scope: Deactivated successfully. 
Aug 13 01:07:30.257726 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:07:30.258810 systemd-logind[1456]: Removed session 6. Aug 13 01:07:30.316099 systemd[1]: Started sshd@6-172.234.214.191:22-139.178.89.65:33848.service - OpenSSH per-connection server daemon (139.178.89.65:33848). Aug 13 01:07:30.646521 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 33848 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:30.647789 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:30.652461 systemd-logind[1456]: New session 7 of user core. Aug 13 01:07:30.667013 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:07:30.843767 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:07:30.844093 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:07:31.497668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:07:31.497801 systemd[1]: kubelet.service: Consumed 918ms CPU time, 266.9M memory peak. Aug 13 01:07:31.503268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:07:31.534127 systemd[1]: Reload requested from client PID 1728 ('systemctl') (unit session-7.scope)... Aug 13 01:07:31.534142 systemd[1]: Reloading... Aug 13 01:07:31.674348 zram_generator::config[1775]: No configuration found. Aug 13 01:07:31.777141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:07:31.869886 systemd[1]: Reloading finished in 335 ms. Aug 13 01:07:31.918659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 01:07:31.923046 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:07:31.930157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:07:31.931841 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:07:31.932170 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:07:31.932216 systemd[1]: kubelet.service: Consumed 151ms CPU time, 99.9M memory peak. Aug 13 01:07:31.938343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:07:32.091546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:07:32.096308 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:07:32.134386 kubelet[1833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:07:32.134386 kubelet[1833]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:07:32.134386 kubelet[1833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:07:32.136050 kubelet[1833]: I0813 01:07:32.136009 1833 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:07:32.710118 kubelet[1833]: I0813 01:07:32.710082 1833 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:07:32.710973 kubelet[1833]: I0813 01:07:32.710238 1833 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:07:32.710973 kubelet[1833]: I0813 01:07:32.710758 1833 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:07:32.737019 kubelet[1833]: I0813 01:07:32.736975 1833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:07:32.744379 kubelet[1833]: E0813 01:07:32.744297 1833 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:07:32.744379 kubelet[1833]: I0813 01:07:32.744354 1833 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:07:32.749631 kubelet[1833]: I0813 01:07:32.749606 1833 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:07:32.749961 kubelet[1833]: I0813 01:07:32.749923 1833 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:07:32.750159 kubelet[1833]: I0813 01:07:32.749958 1833 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"192.168.133.100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:07:32.750248 kubelet[1833]: I0813 01:07:32.750158 1833 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 
01:07:32.750248 kubelet[1833]: I0813 01:07:32.750172 1833 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:07:32.750331 kubelet[1833]: I0813 01:07:32.750308 1833 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:07:32.753216 kubelet[1833]: I0813 01:07:32.753194 1833 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:07:32.753216 kubelet[1833]: I0813 01:07:32.753215 1833 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:07:32.753667 kubelet[1833]: I0813 01:07:32.753236 1833 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:07:32.753667 kubelet[1833]: I0813 01:07:32.753252 1833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:07:32.756773 kubelet[1833]: E0813 01:07:32.756498 1833 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:32.756773 kubelet[1833]: E0813 01:07:32.756544 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:32.759182 kubelet[1833]: I0813 01:07:32.758712 1833 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 01:07:32.759320 kubelet[1833]: I0813 01:07:32.759283 1833 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 01:07:32.760098 kubelet[1833]: W0813 01:07:32.760066 1833 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 01:07:32.762528 kubelet[1833]: I0813 01:07:32.762502 1833 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:07:32.762569 kubelet[1833]: I0813 01:07:32.762557 1833 server.go:1289] "Started kubelet" Aug 13 01:07:32.763819 kubelet[1833]: I0813 01:07:32.762672 1833 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:07:32.763819 kubelet[1833]: I0813 01:07:32.763422 1833 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:07:32.767273 kubelet[1833]: I0813 01:07:32.766594 1833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:07:32.767273 kubelet[1833]: I0813 01:07:32.766846 1833 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:07:32.769377 kubelet[1833]: I0813 01:07:32.769360 1833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:07:32.772288 kubelet[1833]: I0813 01:07:32.772270 1833 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:07:32.775113 kubelet[1833]: E0813 01:07:32.775094 1833 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:07:32.775227 kubelet[1833]: E0813 01:07:32.775215 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:32.775285 kubelet[1833]: I0813 01:07:32.775275 1833 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:07:32.775504 kubelet[1833]: I0813 01:07:32.775489 1833 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:07:32.776141 kubelet[1833]: I0813 01:07:32.776128 1833 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:07:32.777942 kubelet[1833]: I0813 01:07:32.777442 1833 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:07:32.777942 kubelet[1833]: I0813 01:07:32.777537 1833 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:07:32.780088 kubelet[1833]: I0813 01:07:32.780073 1833 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:07:32.799488 kubelet[1833]: I0813 01:07:32.799465 1833 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:07:32.799488 kubelet[1833]: I0813 01:07:32.799482 1833 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:07:32.799561 kubelet[1833]: I0813 01:07:32.799498 1833 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:07:32.801311 kubelet[1833]: I0813 01:07:32.801285 1833 policy_none.go:49] "None policy: Start" Aug 13 01:07:32.801347 kubelet[1833]: I0813 01:07:32.801317 1833 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:07:32.801347 kubelet[1833]: I0813 01:07:32.801330 1833 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:07:32.808707 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Aug 13 01:07:32.822701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:07:32.828164 kubelet[1833]: E0813 01:07:32.828065 1833 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 01:07:32.828219 kubelet[1833]: E0813 01:07:32.828176 1833 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"192.168.133.100\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Aug 13 01:07:32.828242 kubelet[1833]: E0813 01:07:32.828219 1833 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 01:07:32.828273 kubelet[1833]: E0813 01:07:32.828244 1833 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"192.168.133.100\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:07:32.829598 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 01:07:32.832948 kubelet[1833]: E0813 01:07:32.831671 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{192.168.133.100.185b2e2d80ef809e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:192.168.133.100,UID:192.168.133.100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:192.168.133.100,},FirstTimestamp:2025-08-13 01:07:32.762525854 +0000 UTC m=+0.661644602,LastTimestamp:2025-08-13 01:07:32.762525854 +0000 UTC m=+0.661644602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:192.168.133.100,}" Aug 13 01:07:32.839000 kubelet[1833]: E0813 01:07:32.838964 1833 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:07:32.839152 kubelet[1833]: I0813 01:07:32.839127 1833 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:07:32.839185 kubelet[1833]: I0813 01:07:32.839147 1833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:07:32.839650 kubelet[1833]: I0813 01:07:32.839629 1833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:07:32.841038 kubelet[1833]: E0813 01:07:32.840716 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{192.168.133.100.185b2e2d81af1824 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:192.168.133.100,UID:192.168.133.100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:192.168.133.100,},FirstTimestamp:2025-08-13 01:07:32.77508202 +0000 UTC m=+0.674200768,LastTimestamp:2025-08-13 01:07:32.77508202 +0000 UTC m=+0.674200768,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:192.168.133.100,}" Aug 13 01:07:32.844225 kubelet[1833]: E0813 01:07:32.844031 1833 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:07:32.844225 kubelet[1833]: E0813 01:07:32.844066 1833 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"192.168.133.100\" not found" Aug 13 01:07:32.848530 kubelet[1833]: I0813 01:07:32.848493 1833 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 01:07:32.850085 kubelet[1833]: I0813 01:07:32.849783 1833 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:07:32.850085 kubelet[1833]: I0813 01:07:32.849811 1833 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:07:32.850085 kubelet[1833]: I0813 01:07:32.849827 1833 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 01:07:32.850085 kubelet[1833]: I0813 01:07:32.849833 1833 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:07:32.850085 kubelet[1833]: E0813 01:07:32.849920 1833 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 01:07:32.865344 kubelet[1833]: E0813 01:07:32.865261 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{192.168.133.100.185b2e2d83075e67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:192.168.133.100,UID:192.168.133.100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 192.168.133.100 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:192.168.133.100,},FirstTimestamp:2025-08-13 01:07:32.797644391 +0000 UTC m=+0.696763139,LastTimestamp:2025-08-13 01:07:32.797644391 +0000 UTC m=+0.696763139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:192.168.133.100,}" Aug 13 01:07:32.903775 kubelet[1833]: E0813 01:07:32.903686 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{192.168.133.100.185b2e2d830779bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:192.168.133.100,UID:192.168.133.100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 192.168.133.100 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:192.168.133.100,},FirstTimestamp:2025-08-13 01:07:32.797651391 +0000 UTC 
m=+0.696770139,LastTimestamp:2025-08-13 01:07:32.797651391 +0000 UTC m=+0.696770139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:192.168.133.100,}" Aug 13 01:07:32.904046 kubelet[1833]: E0813 01:07:32.903750 1833 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 01:07:32.940035 kubelet[1833]: I0813 01:07:32.939989 1833 kubelet_node_status.go:75] "Attempting to register node" node="192.168.133.100" Aug 13 01:07:32.998497 kubelet[1833]: I0813 01:07:32.998401 1833 kubelet_node_status.go:78] "Successfully registered node" node="192.168.133.100" Aug 13 01:07:32.998497 kubelet[1833]: E0813 01:07:32.998442 1833 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"192.168.133.100\": node \"192.168.133.100\" not found" Aug 13 01:07:33.174838 kubelet[1833]: E0813 01:07:33.174791 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.276039 kubelet[1833]: E0813 01:07:33.275892 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.302557 sudo[1693]: pam_unix(sudo:session): session closed for user root Aug 13 01:07:33.353388 sshd[1692]: Connection closed by 139.178.89.65 port 33848 Aug 13 01:07:33.353865 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:33.357452 systemd[1]: sshd@6-172.234.214.191:22-139.178.89.65:33848.service: Deactivated successfully. Aug 13 01:07:33.360868 systemd[1]: session-7.scope: Deactivated successfully. 
Aug 13 01:07:33.361265 systemd[1]: session-7.scope: Consumed 449ms CPU time, 75.5M memory peak. Aug 13 01:07:33.363545 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:07:33.364939 systemd-logind[1456]: Removed session 7. Aug 13 01:07:33.376359 kubelet[1833]: E0813 01:07:33.376336 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.476522 kubelet[1833]: E0813 01:07:33.476481 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.577418 kubelet[1833]: E0813 01:07:33.577278 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.677840 kubelet[1833]: E0813 01:07:33.677804 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.714150 kubelet[1833]: I0813 01:07:33.714129 1833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 13 01:07:33.756647 kubelet[1833]: E0813 01:07:33.756604 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:33.778508 kubelet[1833]: E0813 01:07:33.778484 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.879149 kubelet[1833]: E0813 01:07:33.878984 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:33.979701 kubelet[1833]: E0813 01:07:33.979676 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:34.080496 kubelet[1833]: E0813 01:07:34.080439 1833 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:34.180687 kubelet[1833]: E0813 01:07:34.180557 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:34.281405 kubelet[1833]: E0813 01:07:34.281345 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.133.100\" not found" Aug 13 01:07:34.383059 kubelet[1833]: I0813 01:07:34.383029 1833 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 13 01:07:34.383503 containerd[1478]: time="2025-08-13T01:07:34.383461743Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:07:34.383995 kubelet[1833]: I0813 01:07:34.383924 1833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 13 01:07:34.757608 kubelet[1833]: E0813 01:07:34.757559 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:34.758658 kubelet[1833]: I0813 01:07:34.758635 1833 apiserver.go:52] "Watching apiserver" Aug 13 01:07:34.773648 kubelet[1833]: E0813 01:07:34.773198 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595" Aug 13 01:07:34.776135 kubelet[1833]: I0813 01:07:34.776115 1833 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:07:34.782149 systemd[1]: Created slice kubepods-besteffort-pod7f26481a_205b_42bf_bb1f_48df3d99d8eb.slice - libcontainer container kubepods-besteffort-pod7f26481a_205b_42bf_bb1f_48df3d99d8eb.slice. 
Aug 13 01:07:34.789243 kubelet[1833]: I0813 01:07:34.789213 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f5ba40c2-4d45-4179-8c4f-7fe837c00595-registration-dir\") pod \"csi-node-driver-bc77x\" (UID: \"f5ba40c2-4d45-4179-8c4f-7fe837c00595\") " pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:34.789292 kubelet[1833]: I0813 01:07:34.789244 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f5ba40c2-4d45-4179-8c4f-7fe837c00595-socket-dir\") pod \"csi-node-driver-bc77x\" (UID: \"f5ba40c2-4d45-4179-8c4f-7fe837c00595\") " pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:34.789292 kubelet[1833]: I0813 01:07:34.789261 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fc5\" (UniqueName: \"kubernetes.io/projected/b8a54bf6-e5dd-4d55-bab9-85c30b1288f3-kube-api-access-d4fc5\") pod \"kube-proxy-qgfjh\" (UID: \"b8a54bf6-e5dd-4d55-bab9-85c30b1288f3\") " pod="kube-system/kube-proxy-qgfjh" Aug 13 01:07:34.789333 kubelet[1833]: I0813 01:07:34.789294 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8074bc93-b91c-448d-80a1-893c9f8548f6-node-certs\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789333 kubelet[1833]: I0813 01:07:34.789309 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-var-run-calico\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789333 kubelet[1833]: I0813 
01:07:34.789328 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f5ba40c2-4d45-4179-8c4f-7fe837c00595-varrun\") pod \"csi-node-driver-bc77x\" (UID: \"f5ba40c2-4d45-4179-8c4f-7fe837c00595\") " pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:34.789396 kubelet[1833]: I0813 01:07:34.789341 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8a54bf6-e5dd-4d55-bab9-85c30b1288f3-kube-proxy\") pod \"kube-proxy-qgfjh\" (UID: \"b8a54bf6-e5dd-4d55-bab9-85c30b1288f3\") " pod="kube-system/kube-proxy-qgfjh" Aug 13 01:07:34.789396 kubelet[1833]: I0813 01:07:34.789365 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f26481a-205b-42bf-bb1f-48df3d99d8eb-var-lib-calico\") pod \"tigera-operator-747864d56d-wmzmk\" (UID: \"7f26481a-205b-42bf-bb1f-48df3d99d8eb\") " pod="tigera-operator/tigera-operator-747864d56d-wmzmk" Aug 13 01:07:34.789396 kubelet[1833]: I0813 01:07:34.789381 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gssk5\" (UniqueName: \"kubernetes.io/projected/7f26481a-205b-42bf-bb1f-48df3d99d8eb-kube-api-access-gssk5\") pod \"tigera-operator-747864d56d-wmzmk\" (UID: \"7f26481a-205b-42bf-bb1f-48df3d99d8eb\") " pod="tigera-operator/tigera-operator-747864d56d-wmzmk" Aug 13 01:07:34.789396 kubelet[1833]: I0813 01:07:34.789395 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-cni-bin-dir\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789473 kubelet[1833]: I0813 
01:07:34.789408 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-cni-net-dir\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789473 kubelet[1833]: I0813 01:07:34.789424 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-flexvol-driver-host\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789473 kubelet[1833]: I0813 01:07:34.789437 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-policysync\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789473 kubelet[1833]: I0813 01:07:34.789451 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8074bc93-b91c-448d-80a1-893c9f8548f6-tigera-ca-bundle\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789473 kubelet[1833]: I0813 01:07:34.789464 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-var-lib-calico\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789563 kubelet[1833]: I0813 01:07:34.789478 1833 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-cni-log-dir\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789563 kubelet[1833]: I0813 01:07:34.789492 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-lib-modules\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789563 kubelet[1833]: I0813 01:07:34.789507 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxv55\" (UniqueName: \"kubernetes.io/projected/8074bc93-b91c-448d-80a1-893c9f8548f6-kube-api-access-sxv55\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789563 kubelet[1833]: I0813 01:07:34.789520 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnw2\" (UniqueName: \"kubernetes.io/projected/f5ba40c2-4d45-4179-8c4f-7fe837c00595-kube-api-access-wtnw2\") pod \"csi-node-driver-bc77x\" (UID: \"f5ba40c2-4d45-4179-8c4f-7fe837c00595\") " pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:34.789563 kubelet[1833]: I0813 01:07:34.789532 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8a54bf6-e5dd-4d55-bab9-85c30b1288f3-xtables-lock\") pod \"kube-proxy-qgfjh\" (UID: \"b8a54bf6-e5dd-4d55-bab9-85c30b1288f3\") " pod="kube-system/kube-proxy-qgfjh" Aug 13 01:07:34.789726 kubelet[1833]: I0813 01:07:34.789545 1833 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8a54bf6-e5dd-4d55-bab9-85c30b1288f3-lib-modules\") pod \"kube-proxy-qgfjh\" (UID: \"b8a54bf6-e5dd-4d55-bab9-85c30b1288f3\") " pod="kube-system/kube-proxy-qgfjh" Aug 13 01:07:34.789726 kubelet[1833]: I0813 01:07:34.789557 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8074bc93-b91c-448d-80a1-893c9f8548f6-xtables-lock\") pod \"calico-node-zkd5h\" (UID: \"8074bc93-b91c-448d-80a1-893c9f8548f6\") " pod="calico-system/calico-node-zkd5h" Aug 13 01:07:34.789726 kubelet[1833]: I0813 01:07:34.789574 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5ba40c2-4d45-4179-8c4f-7fe837c00595-kubelet-dir\") pod \"csi-node-driver-bc77x\" (UID: \"f5ba40c2-4d45-4179-8c4f-7fe837c00595\") " pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:34.792640 systemd[1]: Created slice kubepods-besteffort-podb8a54bf6_e5dd_4d55_bab9_85c30b1288f3.slice - libcontainer container kubepods-besteffort-podb8a54bf6_e5dd_4d55_bab9_85c30b1288f3.slice. Aug 13 01:07:34.806618 systemd[1]: Created slice kubepods-besteffort-pod8074bc93_b91c_448d_80a1_893c9f8548f6.slice - libcontainer container kubepods-besteffort-pod8074bc93_b91c_448d_80a1_893c9f8548f6.slice. 
Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.893950 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.896929 kubelet[1833]: W0813 01:07:34.893969 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.894092 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.895014 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.896929 kubelet[1833]: W0813 01:07:34.895023 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.895033 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.895449 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.896929 kubelet[1833]: W0813 01:07:34.895457 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.895465 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.896929 kubelet[1833]: E0813 01:07:34.895852 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.897183 kubelet[1833]: W0813 01:07:34.895860 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.897183 kubelet[1833]: E0813 01:07:34.895868 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.897753 kubelet[1833]: E0813 01:07:34.897732 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.897753 kubelet[1833]: W0813 01:07:34.897747 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.897816 kubelet[1833]: E0813 01:07:34.897761 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.898939 kubelet[1833]: E0813 01:07:34.898541 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.898939 kubelet[1833]: W0813 01:07:34.898666 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.898939 kubelet[1833]: E0813 01:07:34.898677 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.900844 kubelet[1833]: E0813 01:07:34.899804 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.900844 kubelet[1833]: W0813 01:07:34.899817 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.900844 kubelet[1833]: E0813 01:07:34.899828 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.901074 kubelet[1833]: E0813 01:07:34.901061 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.901123 kubelet[1833]: W0813 01:07:34.901112 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.901166 kubelet[1833]: E0813 01:07:34.901155 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.901366 kubelet[1833]: E0813 01:07:34.901354 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.901415 kubelet[1833]: W0813 01:07:34.901404 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.901470 kubelet[1833]: E0813 01:07:34.901458 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.901665 kubelet[1833]: E0813 01:07:34.901654 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.901712 kubelet[1833]: W0813 01:07:34.901702 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.901751 kubelet[1833]: E0813 01:07:34.901742 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.902036 kubelet[1833]: E0813 01:07:34.902025 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.902085 kubelet[1833]: W0813 01:07:34.902075 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.902142 kubelet[1833]: E0813 01:07:34.902131 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.902346 kubelet[1833]: E0813 01:07:34.902335 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.902400 kubelet[1833]: W0813 01:07:34.902390 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.902440 kubelet[1833]: E0813 01:07:34.902431 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.902629 kubelet[1833]: E0813 01:07:34.902618 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.902676 kubelet[1833]: W0813 01:07:34.902666 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.902728 kubelet[1833]: E0813 01:07:34.902717 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.902952 kubelet[1833]: E0813 01:07:34.902940 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.903004 kubelet[1833]: W0813 01:07:34.902991 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.903044 kubelet[1833]: E0813 01:07:34.903034 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.903694 kubelet[1833]: E0813 01:07:34.903675 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.903694 kubelet[1833]: W0813 01:07:34.903690 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.903748 kubelet[1833]: E0813 01:07:34.903700 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.903982 kubelet[1833]: E0813 01:07:34.903964 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.903982 kubelet[1833]: W0813 01:07:34.903978 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.903982 kubelet[1833]: E0813 01:07:34.903987 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.904167 kubelet[1833]: E0813 01:07:34.904149 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.904167 kubelet[1833]: W0813 01:07:34.904163 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.904229 kubelet[1833]: E0813 01:07:34.904171 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:07:34.905096 kubelet[1833]: E0813 01:07:34.905084 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.905171 kubelet[1833]: W0813 01:07:34.905160 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.905238 kubelet[1833]: E0813 01:07:34.905226 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:07:34.905478 kubelet[1833]: E0813 01:07:34.905467 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:07:34.905547 kubelet[1833]: W0813 01:07:34.905536 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:07:34.905589 kubelet[1833]: E0813 01:07:34.905579 1833 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 01:07:35.089752 containerd[1478]: time="2025-08-13T01:07:35.089645556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-wmzmk,Uid:7f26481a-205b-42bf-bb1f-48df3d99d8eb,Namespace:tigera-operator,Attempt:0,}"
Aug 13 01:07:35.096356 kubelet[1833]: E0813 01:07:35.096282 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:35.097170 containerd[1478]: time="2025-08-13T01:07:35.096918970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgfjh,Uid:b8a54bf6-e5dd-4d55-bab9-85c30b1288f3,Namespace:kube-system,Attempt:0,}"
Aug 13 01:07:35.109183 containerd[1478]: time="2025-08-13T01:07:35.109152346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkd5h,Uid:8074bc93-b91c-448d-80a1-893c9f8548f6,Namespace:calico-system,Attempt:0,}"
Aug 13 01:07:35.758522 kubelet[1833]: E0813 01:07:35.758475 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:35.909285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758843243.mount: Deactivated successfully.
Aug 13 01:07:35.912478 containerd[1478]: time="2025-08-13T01:07:35.912414367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:07:35.913924 containerd[1478]: time="2025-08-13T01:07:35.913697758Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:07:35.916934 containerd[1478]: time="2025-08-13T01:07:35.916853430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 13 01:07:35.918700 containerd[1478]: time="2025-08-13T01:07:35.918341460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 01:07:35.919781 containerd[1478]: time="2025-08-13T01:07:35.919630761Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:07:35.922956 containerd[1478]: time="2025-08-13T01:07:35.922497952Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:07:35.922956 containerd[1478]: time="2025-08-13T01:07:35.922913263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 01:07:35.923981 containerd[1478]: time="2025-08-13T01:07:35.923960223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:07:35.926596 containerd[1478]: time="2025-08-13T01:07:35.926573304Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 829.580234ms"
Aug 13 01:07:35.928616 containerd[1478]: time="2025-08-13T01:07:35.928579925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 819.361449ms"
Aug 13 01:07:35.929282 containerd[1478]: time="2025-08-13T01:07:35.929252416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 839.49288ms"
Aug 13 01:07:36.027974 containerd[1478]: time="2025-08-13T01:07:36.024999404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:07:36.027974 containerd[1478]: time="2025-08-13T01:07:36.027472765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:07:36.027974 containerd[1478]: time="2025-08-13T01:07:36.027488925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:07:36.028745 containerd[1478]: time="2025-08-13T01:07:36.028633825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:07:36.029884 containerd[1478]: time="2025-08-13T01:07:36.029819736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:07:36.029884 containerd[1478]: time="2025-08-13T01:07:36.029862416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:07:36.029982 containerd[1478]: time="2025-08-13T01:07:36.029875066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:07:36.029982 containerd[1478]: time="2025-08-13T01:07:36.029954196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:07:36.031129 containerd[1478]: time="2025-08-13T01:07:36.031065397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:07:36.031203 containerd[1478]: time="2025-08-13T01:07:36.031121577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:07:36.031203 containerd[1478]: time="2025-08-13T01:07:36.031135937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:07:36.031267 containerd[1478]: time="2025-08-13T01:07:36.031195647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:07:36.102508 systemd[1]: Started cri-containerd-6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f.scope - libcontainer container 6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f.
Aug 13 01:07:36.109975 systemd[1]: Started cri-containerd-3e39e0fe6f04f4a469b1521dc7edce2799477db4a390c774b49606bdd77ecb49.scope - libcontainer container 3e39e0fe6f04f4a469b1521dc7edce2799477db4a390c774b49606bdd77ecb49.
Aug 13 01:07:36.114482 systemd[1]: Started cri-containerd-f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51.scope - libcontainer container f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51.
Aug 13 01:07:36.151518 containerd[1478]: time="2025-08-13T01:07:36.151425097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgfjh,Uid:b8a54bf6-e5dd-4d55-bab9-85c30b1288f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e39e0fe6f04f4a469b1521dc7edce2799477db4a390c774b49606bdd77ecb49\""
Aug 13 01:07:36.154798 kubelet[1833]: E0813 01:07:36.153783 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:36.157138 containerd[1478]: time="2025-08-13T01:07:36.157110360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 13 01:07:36.167212 containerd[1478]: time="2025-08-13T01:07:36.167191085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkd5h,Uid:8074bc93-b91c-448d-80a1-893c9f8548f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\""
Aug 13 01:07:36.171602 containerd[1478]: time="2025-08-13T01:07:36.171378177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-wmzmk,Uid:7f26481a-205b-42bf-bb1f-48df3d99d8eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\""
Aug 13 01:07:36.758926 kubelet[1833]: E0813 01:07:36.758867 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:36.851451 kubelet[1833]: E0813 01:07:36.851143 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595"
Aug 13 01:07:37.499642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815890215.mount: Deactivated successfully.
Aug 13 01:07:37.759618 kubelet[1833]: E0813 01:07:37.759402 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:37.891011 containerd[1478]: time="2025-08-13T01:07:37.890969216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:37.892076 containerd[1478]: time="2025-08-13T01:07:37.891920096Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666"
Aug 13 01:07:37.892935 containerd[1478]: time="2025-08-13T01:07:37.892644217Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:37.894547 containerd[1478]: time="2025-08-13T01:07:37.894515298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:37.895358 containerd[1478]: time="2025-08-13T01:07:37.895328078Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 1.738188438s"
Aug 13 01:07:37.895611 containerd[1478]: time="2025-08-13T01:07:37.895597408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\""
Aug 13 01:07:37.896928 containerd[1478]: time="2025-08-13T01:07:37.896873939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 01:07:37.899231 containerd[1478]: time="2025-08-13T01:07:37.899199140Z" level=info msg="CreateContainer within sandbox \"3e39e0fe6f04f4a469b1521dc7edce2799477db4a390c774b49606bdd77ecb49\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 01:07:37.914099 containerd[1478]: time="2025-08-13T01:07:37.914055267Z" level=info msg="CreateContainer within sandbox \"3e39e0fe6f04f4a469b1521dc7edce2799477db4a390c774b49606bdd77ecb49\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6cc4ac5e2fe5f7baff4e4d37bb506604d09eb980ff939f2dd0786c93ebbb4fdf\""
Aug 13 01:07:37.914624 containerd[1478]: time="2025-08-13T01:07:37.914601388Z" level=info msg="StartContainer for \"6cc4ac5e2fe5f7baff4e4d37bb506604d09eb980ff939f2dd0786c93ebbb4fdf\""
Aug 13 01:07:37.945053 systemd[1]: Started cri-containerd-6cc4ac5e2fe5f7baff4e4d37bb506604d09eb980ff939f2dd0786c93ebbb4fdf.scope - libcontainer container 6cc4ac5e2fe5f7baff4e4d37bb506604d09eb980ff939f2dd0786c93ebbb4fdf.
Aug 13 01:07:37.978684 containerd[1478]: time="2025-08-13T01:07:37.978563280Z" level=info msg="StartContainer for \"6cc4ac5e2fe5f7baff4e4d37bb506604d09eb980ff939f2dd0786c93ebbb4fdf\" returns successfully"
Aug 13 01:07:38.566080 containerd[1478]: time="2025-08-13T01:07:38.566028543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:38.566818 containerd[1478]: time="2025-08-13T01:07:38.566781774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797"
Aug 13 01:07:38.567278 containerd[1478]: time="2025-08-13T01:07:38.567236174Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:38.570140 containerd[1478]: time="2025-08-13T01:07:38.569558725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:38.570140 containerd[1478]: time="2025-08-13T01:07:38.570036885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 673.098616ms"
Aug 13 01:07:38.570140 containerd[1478]: time="2025-08-13T01:07:38.570061005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 01:07:38.571653 containerd[1478]: time="2025-08-13T01:07:38.571624786Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 01:07:38.573513 containerd[1478]: time="2025-08-13T01:07:38.573484527Z" level=info msg="CreateContainer within sandbox \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 01:07:38.589170 containerd[1478]: time="2025-08-13T01:07:38.589134005Z" level=info msg="CreateContainer within sandbox \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad\""
Aug 13 01:07:38.589517 containerd[1478]: time="2025-08-13T01:07:38.589407685Z" level=info msg="StartContainer for \"26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad\""
Aug 13 01:07:38.620035 systemd[1]: Started cri-containerd-26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad.scope - libcontainer container 26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad.
Aug 13 01:07:38.649576 containerd[1478]: time="2025-08-13T01:07:38.649538975Z" level=info msg="StartContainer for \"26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad\" returns successfully"
Aug 13 01:07:38.659627 systemd[1]: cri-containerd-26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad.scope: Deactivated successfully.
Aug 13 01:07:38.700080 containerd[1478]: time="2025-08-13T01:07:38.700017140Z" level=info msg="shim disconnected" id=26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad namespace=k8s.io
Aug 13 01:07:38.700080 containerd[1478]: time="2025-08-13T01:07:38.700071500Z" level=warning msg="cleaning up after shim disconnected" id=26f398f8d1efeeab1130aaef235439dd61e57cf5e98e5a8e95fd8f84e93626ad namespace=k8s.io
Aug 13 01:07:38.700080 containerd[1478]: time="2025-08-13T01:07:38.700079970Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:07:38.760237 kubelet[1833]: E0813 01:07:38.760203 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:38.850534 kubelet[1833]: E0813 01:07:38.850154 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595"
Aug 13 01:07:38.865642 kubelet[1833]: E0813 01:07:38.865592 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:38.907184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195761301.mount: Deactivated successfully.
Aug 13 01:07:39.569064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192041468.mount: Deactivated successfully.
Aug 13 01:07:39.761034 kubelet[1833]: E0813 01:07:39.760992 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:39.868604 kubelet[1833]: E0813 01:07:39.867796 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:40.081028 containerd[1478]: time="2025-08-13T01:07:40.080965700Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:40.081779 containerd[1478]: time="2025-08-13T01:07:40.081741961Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 01:07:40.083289 containerd[1478]: time="2025-08-13T01:07:40.082165121Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:40.084778 containerd[1478]: time="2025-08-13T01:07:40.084004532Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:40.084778 containerd[1478]: time="2025-08-13T01:07:40.084661052Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.513007136s"
Aug 13 01:07:40.084778 containerd[1478]: time="2025-08-13T01:07:40.084691612Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 01:07:40.086653 containerd[1478]: time="2025-08-13T01:07:40.086634993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 01:07:40.088510 containerd[1478]: time="2025-08-13T01:07:40.088478684Z" level=info msg="CreateContainer within sandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 01:07:40.101964 containerd[1478]: time="2025-08-13T01:07:40.101934801Z" level=info msg="CreateContainer within sandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\""
Aug 13 01:07:40.102929 containerd[1478]: time="2025-08-13T01:07:40.102370041Z" level=info msg="StartContainer for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\""
Aug 13 01:07:40.132026 systemd[1]: Started cri-containerd-df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3.scope - libcontainer container df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3.
Aug 13 01:07:40.156800 containerd[1478]: time="2025-08-13T01:07:40.156770358Z" level=info msg="StartContainer for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" returns successfully"
Aug 13 01:07:40.762085 kubelet[1833]: E0813 01:07:40.762059 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:40.850853 kubelet[1833]: E0813 01:07:40.850564 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595"
Aug 13 01:07:40.881068 kubelet[1833]: I0813 01:07:40.881016 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qgfjh" podStartSLOduration=6.1400358 podStartE2EDuration="7.88100127s" podCreationTimestamp="2025-08-13 01:07:33 +0000 UTC" firstStartedPulling="2025-08-13 01:07:36.155618529 +0000 UTC m=+4.054737277" lastFinishedPulling="2025-08-13 01:07:37.896583999 +0000 UTC m=+5.795702747" observedRunningTime="2025-08-13 01:07:38.892688526 +0000 UTC m=+6.791807274" watchObservedRunningTime="2025-08-13 01:07:40.88100127 +0000 UTC m=+8.780120028"
Aug 13 01:07:41.762889 kubelet[1833]: E0813 01:07:41.762736 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:42.183568 containerd[1478]: time="2025-08-13T01:07:42.182831150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:42.183568 containerd[1478]: time="2025-08-13T01:07:42.183455421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Aug 13 01:07:42.184218 containerd[1478]: time="2025-08-13T01:07:42.184178981Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:42.186127 containerd[1478]: time="2025-08-13T01:07:42.185826952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:42.186528 containerd[1478]: time="2025-08-13T01:07:42.186497032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.099746249s"
Aug 13 01:07:42.186528 containerd[1478]: time="2025-08-13T01:07:42.186526192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 01:07:42.189975 containerd[1478]: time="2025-08-13T01:07:42.189949544Z" level=info msg="CreateContainer within sandbox \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 01:07:42.205792 containerd[1478]: time="2025-08-13T01:07:42.205758322Z" level=info msg="CreateContainer within sandbox \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f\""
Aug 13 01:07:42.206369 containerd[1478]: time="2025-08-13T01:07:42.206347182Z" level=info msg="StartContainer for \"928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f\""
Aug 13 01:07:42.237035 systemd[1]: Started
cri-containerd-928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f.scope - libcontainer container 928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f. Aug 13 01:07:42.265523 containerd[1478]: time="2025-08-13T01:07:42.265498482Z" level=info msg="StartContainer for \"928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f\" returns successfully" Aug 13 01:07:42.729446 containerd[1478]: time="2025-08-13T01:07:42.729393563Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:07:42.732271 systemd[1]: cri-containerd-928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f.scope: Deactivated successfully. Aug 13 01:07:42.732592 systemd[1]: cri-containerd-928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f.scope: Consumed 545ms CPU time, 191.2M memory peak, 171.2M written to disk. Aug 13 01:07:42.752634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f-rootfs.mount: Deactivated successfully. 
Aug 13 01:07:42.764632 kubelet[1833]: E0813 01:07:42.764593 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:42.791868 kubelet[1833]: I0813 01:07:42.790693 1833 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:07:42.816892 kubelet[1833]: I0813 01:07:42.812651 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-wmzmk" podStartSLOduration=5.899621359 podStartE2EDuration="9.812634775s" podCreationTimestamp="2025-08-13 01:07:33 +0000 UTC" firstStartedPulling="2025-08-13 01:07:36.172724907 +0000 UTC m=+4.071843665" lastFinishedPulling="2025-08-13 01:07:40.085738333 +0000 UTC m=+7.984857081" observedRunningTime="2025-08-13 01:07:40.88120874 +0000 UTC m=+8.780327488" watchObservedRunningTime="2025-08-13 01:07:42.812634775 +0000 UTC m=+10.711753523" Aug 13 01:07:42.827169 containerd[1478]: time="2025-08-13T01:07:42.826943502Z" level=info msg="shim disconnected" id=928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f namespace=k8s.io Aug 13 01:07:42.827169 containerd[1478]: time="2025-08-13T01:07:42.826989582Z" level=warning msg="cleaning up after shim disconnected" id=928a2b8cab77724fa0c7f19360811f93b30a354dc0de1f0f08646cd312bfd76f namespace=k8s.io Aug 13 01:07:42.827169 containerd[1478]: time="2025-08-13T01:07:42.826997512Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:07:42.834513 systemd[1]: Created slice kubepods-besteffort-pod5bf0817c_2d50_4b1c_b228_be18c278aa6e.slice - libcontainer container kubepods-besteffort-pod5bf0817c_2d50_4b1c_b228_be18c278aa6e.slice. Aug 13 01:07:42.843508 systemd[1]: Created slice kubepods-besteffort-pod6b7cf428_2808_4afb_aea8_f874628caa6c.slice - libcontainer container kubepods-besteffort-pod6b7cf428_2808_4afb_aea8_f874628caa6c.slice. 
Aug 13 01:07:42.846258 kubelet[1833]: I0813 01:07:42.843713 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-config\") pod \"goldmane-768f4c5c69-x4s2d\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:42.846258 kubelet[1833]: I0813 01:07:42.843742 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c98841f2-352f-43ac-b754-01bf12142833-goldmane-key-pair\") pod \"goldmane-768f4c5c69-x4s2d\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:42.846258 kubelet[1833]: I0813 01:07:42.843760 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-x4s2d\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:42.846258 kubelet[1833]: I0813 01:07:42.843777 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrpd\" (UniqueName: \"kubernetes.io/projected/5bf0817c-2d50-4b1c-b228-be18c278aa6e-kube-api-access-zkrpd\") pod \"calico-apiserver-67866967cc-xcncr\" (UID: \"5bf0817c-2d50-4b1c-b228-be18c278aa6e\") " pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:42.846258 kubelet[1833]: I0813 01:07:42.843796 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bf0817c-2d50-4b1c-b228-be18c278aa6e-calico-apiserver-certs\") pod \"calico-apiserver-67866967cc-xcncr\" (UID: 
\"5bf0817c-2d50-4b1c-b228-be18c278aa6e\") " pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:42.846386 kubelet[1833]: I0813 01:07:42.843812 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctnk\" (UniqueName: \"kubernetes.io/projected/c98841f2-352f-43ac-b754-01bf12142833-kube-api-access-5ctnk\") pod \"goldmane-768f4c5c69-x4s2d\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:42.846386 kubelet[1833]: I0813 01:07:42.843828 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b7cf428-2808-4afb-aea8-f874628caa6c-calico-apiserver-certs\") pod \"calico-apiserver-67866967cc-2lw7j\" (UID: \"6b7cf428-2808-4afb-aea8-f874628caa6c\") " pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:42.846386 kubelet[1833]: I0813 01:07:42.843843 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjcw\" (UniqueName: \"kubernetes.io/projected/6b7cf428-2808-4afb-aea8-f874628caa6c-kube-api-access-vxjcw\") pod \"calico-apiserver-67866967cc-2lw7j\" (UID: \"6b7cf428-2808-4afb-aea8-f874628caa6c\") " pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:42.855079 containerd[1478]: time="2025-08-13T01:07:42.855019176Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:07:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:07:42.862740 systemd[1]: Created slice kubepods-besteffort-pod61ce1787_bb8a_413c_8736_b5b6cbd4da1d.slice - libcontainer container kubepods-besteffort-pod61ce1787_bb8a_413c_8736_b5b6cbd4da1d.slice. 
Aug 13 01:07:42.873868 systemd[1]: Created slice kubepods-besteffort-podc98841f2_352f_43ac_b754_01bf12142833.slice - libcontainer container kubepods-besteffort-podc98841f2_352f_43ac_b754_01bf12142833.slice. Aug 13 01:07:42.882014 systemd[1]: Created slice kubepods-besteffort-podf5ba40c2_4d45_4179_8c4f_7fe837c00595.slice - libcontainer container kubepods-besteffort-podf5ba40c2_4d45_4179_8c4f_7fe837c00595.slice. Aug 13 01:07:42.884289 containerd[1478]: time="2025-08-13T01:07:42.884160491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:07:42.884433 containerd[1478]: time="2025-08-13T01:07:42.884408821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:42.945535 kubelet[1833]: I0813 01:07:42.944382 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8mxt\" (UniqueName: \"kubernetes.io/projected/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-kube-api-access-n8mxt\") pod \"whisker-65ffbb4b4d-js9cm\" (UID: \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\") " pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:42.945535 kubelet[1833]: I0813 01:07:42.944426 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-ca-bundle\") pod \"whisker-65ffbb4b4d-js9cm\" (UID: \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\") " pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:42.945535 kubelet[1833]: I0813 01:07:42.944467 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-backend-key-pair\") pod \"whisker-65ffbb4b4d-js9cm\" (UID: \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\") " 
pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:42.947997 containerd[1478]: time="2025-08-13T01:07:42.947967003Z" level=error msg="Failed to destroy network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:42.950163 containerd[1478]: time="2025-08-13T01:07:42.949968554Z" level=error msg="encountered an error cleaning up failed sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:42.950163 containerd[1478]: time="2025-08-13T01:07:42.950040434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:42.951056 kubelet[1833]: E0813 01:07:42.951021 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:42.951141 kubelet[1833]: E0813 01:07:42.951126 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:42.951203 kubelet[1833]: E0813 01:07:42.951190 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:42.951329 kubelet[1833]: E0813 01:07:42.951307 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595" Aug 13 01:07:43.139626 containerd[1478]: time="2025-08-13T01:07:43.139597738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:07:43.151774 containerd[1478]: time="2025-08-13T01:07:43.151207484Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:07:43.168070 containerd[1478]: time="2025-08-13T01:07:43.168047843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:43.183053 containerd[1478]: time="2025-08-13T01:07:43.183026970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:0,}" Aug 13 01:07:43.216663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7-shm.mount: Deactivated successfully. Aug 13 01:07:43.269930 containerd[1478]: time="2025-08-13T01:07:43.269874884Z" level=error msg="Failed to destroy network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.273207 containerd[1478]: time="2025-08-13T01:07:43.271047924Z" level=error msg="encountered an error cleaning up failed sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.273207 containerd[1478]: time="2025-08-13T01:07:43.271101504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.273325 kubelet[1833]: E0813 01:07:43.272796 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.273325 kubelet[1833]: E0813 01:07:43.272849 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:43.273325 kubelet[1833]: E0813 01:07:43.272876 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:43.273409 kubelet[1833]: E0813 01:07:43.272948 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" podUID="5bf0817c-2d50-4b1c-b228-be18c278aa6e" Aug 13 01:07:43.274019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016-shm.mount: Deactivated successfully. Aug 13 01:07:43.276488 containerd[1478]: time="2025-08-13T01:07:43.276406907Z" level=error msg="Failed to destroy network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.277798 containerd[1478]: time="2025-08-13T01:07:43.277662037Z" level=error msg="encountered an error cleaning up failed sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.279197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1-shm.mount: Deactivated successfully. 
Aug 13 01:07:43.279615 containerd[1478]: time="2025-08-13T01:07:43.277708757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.279930 kubelet[1833]: E0813 01:07:43.279879 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.281046 kubelet[1833]: E0813 01:07:43.280990 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:43.281046 kubelet[1833]: E0813 01:07:43.281015 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:43.282721 kubelet[1833]: E0813 01:07:43.281177 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" podUID="6b7cf428-2808-4afb-aea8-f874628caa6c" Aug 13 01:07:43.289485 containerd[1478]: time="2025-08-13T01:07:43.289433833Z" level=error msg="Failed to destroy network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.291426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548-shm.mount: Deactivated successfully. 
Aug 13 01:07:43.292099 containerd[1478]: time="2025-08-13T01:07:43.292057365Z" level=error msg="encountered an error cleaning up failed sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.292216 containerd[1478]: time="2025-08-13T01:07:43.292197595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.292455 kubelet[1833]: E0813 01:07:43.292428 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.292553 kubelet[1833]: E0813 01:07:43.292538 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:43.292643 kubelet[1833]: E0813 01:07:43.292627 1833 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:43.292758 kubelet[1833]: E0813 01:07:43.292737 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d" Aug 13 01:07:43.307665 containerd[1478]: time="2025-08-13T01:07:43.307641972Z" level=error msg="Failed to destroy network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.307996 containerd[1478]: time="2025-08-13T01:07:43.307962933Z" level=error msg="encountered an error cleaning up failed sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.308060 containerd[1478]: time="2025-08-13T01:07:43.308026813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.308206 kubelet[1833]: E0813 01:07:43.308186 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:43.308239 kubelet[1833]: E0813 01:07:43.308229 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:43.308270 kubelet[1833]: E0813 01:07:43.308244 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:43.308308 kubelet[1833]: E0813 01:07:43.308282 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" podUID="c98841f2-352f-43ac-b754-01bf12142833" Aug 13 01:07:43.765923 kubelet[1833]: E0813 01:07:43.765182 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:43.893021 kubelet[1833]: I0813 01:07:43.892418 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1" Aug 13 01:07:43.893307 containerd[1478]: time="2025-08-13T01:07:43.893281805Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\"" Aug 13 01:07:43.893560 containerd[1478]: time="2025-08-13T01:07:43.893543185Z" level=info msg="Ensure that sandbox f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1 in task-service has been cleanup successfully" Aug 13 01:07:43.895339 kubelet[1833]: I0813 01:07:43.895207 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016" Aug 13 01:07:43.895445 containerd[1478]: time="2025-08-13T01:07:43.895426016Z" level=info msg="TearDown network for sandbox 
\"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully" Aug 13 01:07:43.895509 containerd[1478]: time="2025-08-13T01:07:43.895495686Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully" Aug 13 01:07:43.895640 containerd[1478]: time="2025-08-13T01:07:43.895558536Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\"" Aug 13 01:07:43.896145 containerd[1478]: time="2025-08-13T01:07:43.896121856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:1,}" Aug 13 01:07:43.898196 containerd[1478]: time="2025-08-13T01:07:43.897758527Z" level=info msg="Ensure that sandbox b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016 in task-service has been cleanup successfully" Aug 13 01:07:43.900281 containerd[1478]: time="2025-08-13T01:07:43.900259569Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully" Aug 13 01:07:43.900603 containerd[1478]: time="2025-08-13T01:07:43.900278759Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully" Aug 13 01:07:43.900712 kubelet[1833]: I0813 01:07:43.900691 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7" Aug 13 01:07:43.901076 containerd[1478]: time="2025-08-13T01:07:43.901000489Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\"" Aug 13 01:07:43.905577 containerd[1478]: time="2025-08-13T01:07:43.901152599Z" level=info msg="Ensure that sandbox a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7 in task-service has been cleanup successfully" Aug 13 
01:07:43.905577 containerd[1478]: time="2025-08-13T01:07:43.901701119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:1,}" Aug 13 01:07:43.905577 containerd[1478]: time="2025-08-13T01:07:43.903236660Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\"" Aug 13 01:07:43.905577 containerd[1478]: time="2025-08-13T01:07:43.903415480Z" level=info msg="Ensure that sandbox d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940 in task-service has been cleanup successfully" Aug 13 01:07:43.905689 kubelet[1833]: I0813 01:07:43.902875 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940" Aug 13 01:07:43.905689 kubelet[1833]: I0813 01:07:43.905154 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548" Aug 13 01:07:43.907780 containerd[1478]: time="2025-08-13T01:07:43.907679382Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\"" Aug 13 01:07:43.908083 containerd[1478]: time="2025-08-13T01:07:43.908032892Z" level=info msg="Ensure that sandbox a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548 in task-service has been cleanup successfully" Aug 13 01:07:43.908210 containerd[1478]: time="2025-08-13T01:07:43.908183082Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully" Aug 13 01:07:43.908242 containerd[1478]: time="2025-08-13T01:07:43.908210302Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully" Aug 13 01:07:43.908767 containerd[1478]: time="2025-08-13T01:07:43.908591353Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:1,}" Aug 13 01:07:43.908767 containerd[1478]: time="2025-08-13T01:07:43.908690263Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully" Aug 13 01:07:43.908767 containerd[1478]: time="2025-08-13T01:07:43.908704723Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully" Aug 13 01:07:43.909020 containerd[1478]: time="2025-08-13T01:07:43.908961393Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully" Aug 13 01:07:43.909020 containerd[1478]: time="2025-08-13T01:07:43.908979473Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully" Aug 13 01:07:43.909085 containerd[1478]: time="2025-08-13T01:07:43.909042633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:1,}" Aug 13 01:07:43.909516 containerd[1478]: time="2025-08-13T01:07:43.909486463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:1,}" Aug 13 01:07:44.109566 containerd[1478]: time="2025-08-13T01:07:44.109313113Z" level=error msg="Failed to destroy network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.110299 containerd[1478]: time="2025-08-13T01:07:44.110264463Z" level=error msg="encountered an error cleaning up failed sandbox 
\"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.110351 containerd[1478]: time="2025-08-13T01:07:44.110327293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.110802 kubelet[1833]: E0813 01:07:44.110765 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.110929 kubelet[1833]: E0813 01:07:44.110831 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:44.110929 kubelet[1833]: E0813 01:07:44.110863 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:44.111186 kubelet[1833]: E0813 01:07:44.110950 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" podUID="5bf0817c-2d50-4b1c-b228-be18c278aa6e" Aug 13 01:07:44.121225 containerd[1478]: time="2025-08-13T01:07:44.121113849Z" level=error msg="Failed to destroy network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.121748 containerd[1478]: time="2025-08-13T01:07:44.121726949Z" level=error msg="encountered an error cleaning up failed sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 
01:07:44.121882 containerd[1478]: time="2025-08-13T01:07:44.121835349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.122240 kubelet[1833]: E0813 01:07:44.122106 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.122240 kubelet[1833]: E0813 01:07:44.122137 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:44.122240 kubelet[1833]: E0813 01:07:44.122154 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:44.122330 
kubelet[1833]: E0813 01:07:44.122181 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d" Aug 13 01:07:44.126914 containerd[1478]: time="2025-08-13T01:07:44.126421122Z" level=error msg="Failed to destroy network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.126914 containerd[1478]: time="2025-08-13T01:07:44.126813032Z" level=error msg="encountered an error cleaning up failed sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.126914 containerd[1478]: time="2025-08-13T01:07:44.126852062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.127112 kubelet[1833]: E0813 01:07:44.127084 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.127139 kubelet[1833]: E0813 01:07:44.127115 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:44.127139 kubelet[1833]: E0813 01:07:44.127128 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:44.127182 kubelet[1833]: E0813 01:07:44.127153 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595" Aug 13 01:07:44.128178 containerd[1478]: time="2025-08-13T01:07:44.128145862Z" level=error msg="Failed to destroy network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.128509 containerd[1478]: time="2025-08-13T01:07:44.128477393Z" level=error msg="encountered an error cleaning up failed sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.128546 containerd[1478]: time="2025-08-13T01:07:44.128517933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.128693 kubelet[1833]: E0813 01:07:44.128667 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.128789 kubelet[1833]: E0813 01:07:44.128765 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:44.129024 kubelet[1833]: E0813 01:07:44.128788 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:44.130817 kubelet[1833]: E0813 01:07:44.129055 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" 
podUID="c98841f2-352f-43ac-b754-01bf12142833" Aug 13 01:07:44.131031 containerd[1478]: time="2025-08-13T01:07:44.131008344Z" level=error msg="Failed to destroy network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.131368 containerd[1478]: time="2025-08-13T01:07:44.131346244Z" level=error msg="encountered an error cleaning up failed sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.131643 containerd[1478]: time="2025-08-13T01:07:44.131621224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.131961 kubelet[1833]: E0813 01:07:44.131878 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:44.132387 kubelet[1833]: E0813 01:07:44.132355 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:44.132387 kubelet[1833]: E0813 01:07:44.132385 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:44.133207 kubelet[1833]: E0813 01:07:44.132445 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" podUID="6b7cf428-2808-4afb-aea8-f874628caa6c" Aug 13 01:07:44.202505 systemd[1]: run-netns-cni\x2d86e5759e\x2d90d0\x2df52a\x2d8bdb\x2d9fd8f3f0b668.mount: Deactivated successfully. 
Aug 13 01:07:44.203647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940-shm.mount: Deactivated successfully. Aug 13 01:07:44.203786 systemd[1]: run-netns-cni\x2d3cc833fe\x2dfc72\x2d720c\x2dbf72\x2d5a839110009f.mount: Deactivated successfully. Aug 13 01:07:44.203855 systemd[1]: run-netns-cni\x2d27c15967\x2dd7c6\x2d23ef\x2dc16a\x2da83462106fe1.mount: Deactivated successfully. Aug 13 01:07:44.203941 systemd[1]: run-netns-cni\x2dac815c6f\x2d7fc1\x2d4428\x2d5ccc\x2ddbe37dc3339b.mount: Deactivated successfully. Aug 13 01:07:44.204018 systemd[1]: run-netns-cni\x2d3f3f08df\x2d45b6\x2d36fa\x2df21c\x2d45789de98b53.mount: Deactivated successfully. Aug 13 01:07:44.765917 kubelet[1833]: E0813 01:07:44.765845 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:44.909209 kubelet[1833]: I0813 01:07:44.908598 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c" Aug 13 01:07:44.911302 containerd[1478]: time="2025-08-13T01:07:44.909275983Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\"" Aug 13 01:07:44.911302 containerd[1478]: time="2025-08-13T01:07:44.909465013Z" level=info msg="Ensure that sandbox 1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c in task-service has been cleanup successfully" Aug 13 01:07:44.911338 systemd[1]: run-netns-cni\x2d9332ed2b\x2d21d8\x2d7402\x2d1438\x2d27d69cd14691.mount: Deactivated successfully. 
Aug 13 01:07:44.911956 containerd[1478]: time="2025-08-13T01:07:44.911912484Z" level=info msg="TearDown network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" successfully" Aug 13 01:07:44.911956 containerd[1478]: time="2025-08-13T01:07:44.911930574Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" returns successfully" Aug 13 01:07:44.912712 containerd[1478]: time="2025-08-13T01:07:44.912626484Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\"" Aug 13 01:07:44.912712 containerd[1478]: time="2025-08-13T01:07:44.912737544Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully" Aug 13 01:07:44.913123 containerd[1478]: time="2025-08-13T01:07:44.912748554Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully" Aug 13 01:07:44.914535 kubelet[1833]: I0813 01:07:44.914076 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0" Aug 13 01:07:44.914580 containerd[1478]: time="2025-08-13T01:07:44.914208545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:2,}" Aug 13 01:07:44.915928 containerd[1478]: time="2025-08-13T01:07:44.915759716Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\"" Aug 13 01:07:44.916139 containerd[1478]: time="2025-08-13T01:07:44.916123536Z" level=info msg="Ensure that sandbox 511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0 in task-service has been cleanup successfully" Aug 13 01:07:44.917051 containerd[1478]: time="2025-08-13T01:07:44.917031897Z" level=info msg="TearDown network for sandbox 
\"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" successfully" Aug 13 01:07:44.917108 containerd[1478]: time="2025-08-13T01:07:44.917095227Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" returns successfully" Aug 13 01:07:44.917959 containerd[1478]: time="2025-08-13T01:07:44.917679487Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\"" Aug 13 01:07:44.917959 containerd[1478]: time="2025-08-13T01:07:44.917746977Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully" Aug 13 01:07:44.917959 containerd[1478]: time="2025-08-13T01:07:44.917756247Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully" Aug 13 01:07:44.919840 containerd[1478]: time="2025-08-13T01:07:44.919392648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:2,}" Aug 13 01:07:44.920121 systemd[1]: run-netns-cni\x2d8e7248e7\x2ded3f\x2dd297\x2d1660\x2d245910a9ffae.mount: Deactivated successfully. 
Aug 13 01:07:44.921126 kubelet[1833]: I0813 01:07:44.920616 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5"
Aug 13 01:07:44.922302 containerd[1478]: time="2025-08-13T01:07:44.922278609Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\""
Aug 13 01:07:44.922748 containerd[1478]: time="2025-08-13T01:07:44.922727889Z" level=info msg="Ensure that sandbox aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5 in task-service has been cleanup successfully"
Aug 13 01:07:44.924752 systemd[1]: run-netns-cni\x2d5d07e124\x2df460\x2d36db\x2d07a4\x2df91bef89ddb6.mount: Deactivated successfully.
Aug 13 01:07:44.925394 containerd[1478]: time="2025-08-13T01:07:44.924940641Z" level=info msg="TearDown network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" successfully"
Aug 13 01:07:44.925394 containerd[1478]: time="2025-08-13T01:07:44.924957781Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" returns successfully"
Aug 13 01:07:44.925462 containerd[1478]: time="2025-08-13T01:07:44.925280941Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\""
Aug 13 01:07:44.925614 containerd[1478]: time="2025-08-13T01:07:44.925483661Z" level=info msg="TearDown network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully"
Aug 13 01:07:44.925614 containerd[1478]: time="2025-08-13T01:07:44.925496591Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully"
Aug 13 01:07:44.927482 kubelet[1833]: I0813 01:07:44.927436 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0"
Aug 13 01:07:44.928126 containerd[1478]: time="2025-08-13T01:07:44.927296292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:2,}"
Aug 13 01:07:44.928474 containerd[1478]: time="2025-08-13T01:07:44.928450642Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\""
Aug 13 01:07:44.928604 containerd[1478]: time="2025-08-13T01:07:44.928583452Z" level=info msg="Ensure that sandbox 316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0 in task-service has been cleanup successfully"
Aug 13 01:07:44.930352 systemd[1]: run-netns-cni\x2d4e72514c\x2dfd14\x2ddcbd\x2dc21d\x2d73d9ffc05d28.mount: Deactivated successfully.
Aug 13 01:07:44.932499 kubelet[1833]: I0813 01:07:44.932486 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5"
Aug 13 01:07:44.932599 containerd[1478]: time="2025-08-13T01:07:44.932573694Z" level=info msg="TearDown network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" successfully"
Aug 13 01:07:44.934503 containerd[1478]: time="2025-08-13T01:07:44.934386495Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" returns successfully"
Aug 13 01:07:44.934503 containerd[1478]: time="2025-08-13T01:07:44.934477735Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\""
Aug 13 01:07:44.934827 containerd[1478]: time="2025-08-13T01:07:44.934598815Z" level=info msg="Ensure that sandbox a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5 in task-service has been cleanup successfully"
Aug 13 01:07:44.935378 containerd[1478]: time="2025-08-13T01:07:44.935359306Z" level=info msg="TearDown network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" successfully"
Aug 13 01:07:44.935606 containerd[1478]: time="2025-08-13T01:07:44.935594226Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" returns successfully"
Aug 13 01:07:44.935756 containerd[1478]: time="2025-08-13T01:07:44.935740906Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\""
Aug 13 01:07:44.935963 containerd[1478]: time="2025-08-13T01:07:44.935948556Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully"
Aug 13 01:07:44.936012 containerd[1478]: time="2025-08-13T01:07:44.936000746Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully"
Aug 13 01:07:44.936398 containerd[1478]: time="2025-08-13T01:07:44.936374896Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\""
Aug 13 01:07:44.936743 containerd[1478]: time="2025-08-13T01:07:44.936529206Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully"
Aug 13 01:07:44.936743 containerd[1478]: time="2025-08-13T01:07:44.936542196Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully"
Aug 13 01:07:44.937155 containerd[1478]: time="2025-08-13T01:07:44.937130947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:2,}"
Aug 13 01:07:44.938163 containerd[1478]: time="2025-08-13T01:07:44.938143697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:2,}"
Aug 13 01:07:45.072437 containerd[1478]: time="2025-08-13T01:07:45.071532924Z" level=error msg="Failed to destroy network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.074581 containerd[1478]: time="2025-08-13T01:07:45.074542885Z" level=error msg="encountered an error cleaning up failed sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.074814 containerd[1478]: time="2025-08-13T01:07:45.074781615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.075415 kubelet[1833]: E0813 01:07:45.075378 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.075494 kubelet[1833]: E0813 01:07:45.075435 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm"
Aug 13 01:07:45.075494 kubelet[1833]: E0813 01:07:45.075455 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm"
Aug 13 01:07:45.075794 kubelet[1833]: E0813 01:07:45.075505 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d"
Aug 13 01:07:45.085623 containerd[1478]: time="2025-08-13T01:07:45.085352031Z" level=error msg="Failed to destroy network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.087098 containerd[1478]: time="2025-08-13T01:07:45.087065682Z" level=error msg="encountered an error cleaning up failed sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.087287 containerd[1478]: time="2025-08-13T01:07:45.087261582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.087989 kubelet[1833]: E0813 01:07:45.087959 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.088160 kubelet[1833]: E0813 01:07:45.088143 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr"
Aug 13 01:07:45.089252 kubelet[1833]: E0813 01:07:45.088956 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr"
Aug 13 01:07:45.089252 kubelet[1833]: E0813 01:07:45.089038 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" podUID="5bf0817c-2d50-4b1c-b228-be18c278aa6e"
Aug 13 01:07:45.105478 containerd[1478]: time="2025-08-13T01:07:45.105452141Z" level=error msg="Failed to destroy network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.106880 containerd[1478]: time="2025-08-13T01:07:45.106850291Z" level=error msg="encountered an error cleaning up failed sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.107064 containerd[1478]: time="2025-08-13T01:07:45.107012331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.107219 kubelet[1833]: E0813 01:07:45.107175 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.107219 kubelet[1833]: E0813 01:07:45.107213 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d"
Aug 13 01:07:45.107298 kubelet[1833]: E0813 01:07:45.107276 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d"
Aug 13 01:07:45.107348 kubelet[1833]: E0813 01:07:45.107322 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" podUID="c98841f2-352f-43ac-b754-01bf12142833"
Aug 13 01:07:45.113000 containerd[1478]: time="2025-08-13T01:07:45.112977304Z" level=error msg="Failed to destroy network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.113354 containerd[1478]: time="2025-08-13T01:07:45.113329285Z" level=error msg="encountered an error cleaning up failed sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.113421 containerd[1478]: time="2025-08-13T01:07:45.113371005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.113890 kubelet[1833]: E0813 01:07:45.113530 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.113890 kubelet[1833]: E0813 01:07:45.113580 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j"
Aug 13 01:07:45.113890 kubelet[1833]: E0813 01:07:45.113601 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j"
Aug 13 01:07:45.113997 kubelet[1833]: E0813 01:07:45.113648 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" podUID="6b7cf428-2808-4afb-aea8-f874628caa6c"
Aug 13 01:07:45.115364 containerd[1478]: time="2025-08-13T01:07:45.115333716Z" level=error msg="Failed to destroy network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.115636 containerd[1478]: time="2025-08-13T01:07:45.115615246Z" level=error msg="encountered an error cleaning up failed sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.115666 containerd[1478]: time="2025-08-13T01:07:45.115652966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.115955 kubelet[1833]: E0813 01:07:45.115859 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:07:45.116122 kubelet[1833]: E0813 01:07:45.116022 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x"
Aug 13 01:07:45.116122 kubelet[1833]: E0813 01:07:45.116047 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x"
Aug 13 01:07:45.116287 kubelet[1833]: E0813 01:07:45.116223 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595"
Aug 13 01:07:45.200790 systemd[1]: run-netns-cni\x2d73688a45\x2db5f3\x2d6483\x2d26bb\x2d6375e58a0082.mount: Deactivated successfully.
Aug 13 01:07:45.766658 kubelet[1833]: E0813 01:07:45.766540 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:45.936786 kubelet[1833]: I0813 01:07:45.936744 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6"
Aug 13 01:07:45.939932 containerd[1478]: time="2025-08-13T01:07:45.937984817Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\""
Aug 13 01:07:45.939932 containerd[1478]: time="2025-08-13T01:07:45.938160517Z" level=info msg="Ensure that sandbox 47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6 in task-service has been cleanup successfully"
Aug 13 01:07:45.939946 systemd[1]: run-netns-cni\x2d85b04e9b\x2d7e3b\x2d4dc5\x2d4395\x2dae0245fab7cd.mount: Deactivated successfully.
Aug 13 01:07:45.941313 containerd[1478]: time="2025-08-13T01:07:45.940734228Z" level=info msg="TearDown network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" successfully"
Aug 13 01:07:45.941313 containerd[1478]: time="2025-08-13T01:07:45.940752738Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" returns successfully"
Aug 13 01:07:45.942000 containerd[1478]: time="2025-08-13T01:07:45.941981559Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\""
Aug 13 01:07:45.942228 containerd[1478]: time="2025-08-13T01:07:45.942144769Z" level=info msg="TearDown network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" successfully"
Aug 13 01:07:45.942228 containerd[1478]: time="2025-08-13T01:07:45.942186679Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" returns successfully"
Aug 13 01:07:45.942469 containerd[1478]: time="2025-08-13T01:07:45.942452119Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\""
Aug 13 01:07:45.943080 containerd[1478]: time="2025-08-13T01:07:45.943008019Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully"
Aug 13 01:07:45.943080 containerd[1478]: time="2025-08-13T01:07:45.943021999Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully"
Aug 13 01:07:45.943260 kubelet[1833]: I0813 01:07:45.943242 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf"
Aug 13 01:07:45.943922 containerd[1478]: time="2025-08-13T01:07:45.943886770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:3,}"
Aug 13 01:07:45.944934 containerd[1478]: time="2025-08-13T01:07:45.944366190Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\""
Aug 13 01:07:45.944934 containerd[1478]: time="2025-08-13T01:07:45.944515190Z" level=info msg="Ensure that sandbox e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf in task-service has been cleanup successfully"
Aug 13 01:07:45.947098 systemd[1]: run-netns-cni\x2df322f4ec\x2d600a\x2d66fb\x2dcb76\x2de5ab84e611ee.mount: Deactivated successfully.
Aug 13 01:07:45.947427 containerd[1478]: time="2025-08-13T01:07:45.947398821Z" level=info msg="TearDown network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" successfully"
Aug 13 01:07:45.949244 containerd[1478]: time="2025-08-13T01:07:45.949196582Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" returns successfully"
Aug 13 01:07:45.949664 containerd[1478]: time="2025-08-13T01:07:45.949608762Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\""
Aug 13 01:07:45.949808 containerd[1478]: time="2025-08-13T01:07:45.949768533Z" level=info msg="TearDown network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" successfully"
Aug 13 01:07:45.949973 containerd[1478]: time="2025-08-13T01:07:45.949918823Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" returns successfully"
Aug 13 01:07:45.950408 containerd[1478]: time="2025-08-13T01:07:45.950391423Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\""
Aug 13 01:07:45.950632 containerd[1478]: time="2025-08-13T01:07:45.950515513Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully"
Aug 13 01:07:45.950632 containerd[1478]: time="2025-08-13T01:07:45.950528113Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully"
Aug 13 01:07:45.951320 containerd[1478]: time="2025-08-13T01:07:45.951303523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:3,}"
Aug 13 01:07:45.951365 kubelet[1833]: I0813 01:07:45.951309 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6"
Aug 13 01:07:45.952188 containerd[1478]: time="2025-08-13T01:07:45.951960734Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\""
Aug 13 01:07:45.952188 containerd[1478]: time="2025-08-13T01:07:45.952088554Z" level=info msg="Ensure that sandbox de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6 in task-service has been cleanup successfully"
Aug 13 01:07:45.952438 containerd[1478]: time="2025-08-13T01:07:45.952422284Z" level=info msg="TearDown network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" successfully"
Aug 13 01:07:45.952508 containerd[1478]: time="2025-08-13T01:07:45.952495844Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" returns successfully"
Aug 13 01:07:45.954955 systemd[1]: run-netns-cni\x2d16682470\x2daea2\x2d2c7a\x2de6db\x2d4316e6b9565e.mount: Deactivated successfully.
Aug 13 01:07:45.956848 containerd[1478]: time="2025-08-13T01:07:45.955814966Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\""
Aug 13 01:07:45.956848 containerd[1478]: time="2025-08-13T01:07:45.955881446Z" level=info msg="TearDown network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" successfully"
Aug 13 01:07:45.956848 containerd[1478]: time="2025-08-13T01:07:45.955890646Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" returns successfully"
Aug 13 01:07:45.957454 containerd[1478]: time="2025-08-13T01:07:45.957188326Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\""
Aug 13 01:07:45.957582 containerd[1478]: time="2025-08-13T01:07:45.957526096Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully"
Aug 13 01:07:45.957582 containerd[1478]: time="2025-08-13T01:07:45.957540456Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully"
Aug 13 01:07:45.958612 containerd[1478]: time="2025-08-13T01:07:45.958310147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:3,}"
Aug 13 01:07:45.958972 kubelet[1833]: I0813 01:07:45.958954 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f"
Aug 13 01:07:45.959777 containerd[1478]: time="2025-08-13T01:07:45.959760388Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\""
Aug 13 01:07:45.960945 containerd[1478]: time="2025-08-13T01:07:45.960925778Z" level=info msg="Ensure that sandbox b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f in task-service has been cleanup successfully"
Aug 13 01:07:45.961826 containerd[1478]: time="2025-08-13T01:07:45.961501628Z" level=info msg="TearDown network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" successfully"
Aug 13 01:07:45.961869 containerd[1478]: time="2025-08-13T01:07:45.961825149Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" returns successfully"
Aug 13 01:07:45.962866 containerd[1478]: time="2025-08-13T01:07:45.962800609Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\""
Aug 13 01:07:45.964766 containerd[1478]: time="2025-08-13T01:07:45.963321069Z" level=info msg="TearDown network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" successfully"
Aug 13 01:07:45.964766 containerd[1478]: time="2025-08-13T01:07:45.963564539Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" returns successfully"
Aug 13 01:07:45.964766 containerd[1478]: time="2025-08-13T01:07:45.964272570Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\""
Aug 13 01:07:45.964766 containerd[1478]: time="2025-08-13T01:07:45.964349070Z" level=info msg="TearDown network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully"
Aug 13 01:07:45.964766 containerd[1478]: time="2025-08-13T01:07:45.964360200Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully"
Aug 13 01:07:45.966760 systemd[1]: run-netns-cni\x2d759a95f5\x2df6cd\x2d2dbc\x2d2a98\x2d773a53b2113c.mount: Deactivated successfully.
Aug 13 01:07:45.967493 kubelet[1833]: I0813 01:07:45.967117 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb" Aug 13 01:07:45.968774 containerd[1478]: time="2025-08-13T01:07:45.967972882Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\"" Aug 13 01:07:45.968774 containerd[1478]: time="2025-08-13T01:07:45.968122482Z" level=info msg="Ensure that sandbox 937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb in task-service has been cleanup successfully" Aug 13 01:07:45.968774 containerd[1478]: time="2025-08-13T01:07:45.968259632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:3,}" Aug 13 01:07:45.968774 containerd[1478]: time="2025-08-13T01:07:45.968699482Z" level=info msg="TearDown network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" successfully" Aug 13 01:07:45.968774 containerd[1478]: time="2025-08-13T01:07:45.968713502Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" returns successfully" Aug 13 01:07:45.969041 containerd[1478]: time="2025-08-13T01:07:45.968952692Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\"" Aug 13 01:07:45.969041 containerd[1478]: time="2025-08-13T01:07:45.969022242Z" level=info msg="TearDown network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" successfully" Aug 13 01:07:45.969041 containerd[1478]: time="2025-08-13T01:07:45.969032452Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" returns successfully" Aug 13 01:07:45.969935 containerd[1478]: time="2025-08-13T01:07:45.969463882Z" level=info msg="StopPodSandbox 
for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\"" Aug 13 01:07:45.969935 containerd[1478]: time="2025-08-13T01:07:45.969714923Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully" Aug 13 01:07:45.969935 containerd[1478]: time="2025-08-13T01:07:45.969724263Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully" Aug 13 01:07:45.972359 containerd[1478]: time="2025-08-13T01:07:45.972341104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:3,}" Aug 13 01:07:46.147822 containerd[1478]: time="2025-08-13T01:07:46.147758101Z" level=error msg="Failed to destroy network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.149968 containerd[1478]: time="2025-08-13T01:07:46.149941413Z" level=error msg="encountered an error cleaning up failed sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.150047 containerd[1478]: time="2025-08-13T01:07:46.150008963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.150289 kubelet[1833]: E0813 01:07:46.150248 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.150332 kubelet[1833]: E0813 01:07:46.150302 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:46.150332 kubelet[1833]: E0813 01:07:46.150322 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:46.150396 kubelet[1833]: E0813 01:07:46.150371 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" podUID="c98841f2-352f-43ac-b754-01bf12142833" Aug 13 01:07:46.157942 containerd[1478]: time="2025-08-13T01:07:46.157872157Z" level=error msg="Failed to destroy network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.159928 containerd[1478]: time="2025-08-13T01:07:46.158486897Z" level=error msg="encountered an error cleaning up failed sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.159928 containerd[1478]: time="2025-08-13T01:07:46.158549937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.160008 kubelet[1833]: E0813 01:07:46.158892 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.160008 kubelet[1833]: E0813 01:07:46.158962 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:46.160008 kubelet[1833]: E0813 01:07:46.159007 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:46.160077 kubelet[1833]: E0813 01:07:46.159044 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc77x" 
podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595" Aug 13 01:07:46.182075 containerd[1478]: time="2025-08-13T01:07:46.182041939Z" level=error msg="Failed to destroy network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.182582 containerd[1478]: time="2025-08-13T01:07:46.182547019Z" level=error msg="encountered an error cleaning up failed sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.182637 containerd[1478]: time="2025-08-13T01:07:46.182602709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.182818 kubelet[1833]: E0813 01:07:46.182765 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.182874 kubelet[1833]: E0813 01:07:46.182832 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:46.182874 kubelet[1833]: E0813 01:07:46.182850 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:46.183021 kubelet[1833]: E0813 01:07:46.182892 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" podUID="6b7cf428-2808-4afb-aea8-f874628caa6c" Aug 13 01:07:46.184993 containerd[1478]: time="2025-08-13T01:07:46.184970210Z" level=error msg="Failed to destroy network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.185307 containerd[1478]: time="2025-08-13T01:07:46.185285140Z" level=error msg="encountered an error cleaning up failed sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.185587 containerd[1478]: time="2025-08-13T01:07:46.185569030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.185918 kubelet[1833]: E0813 01:07:46.185768 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.185992 kubelet[1833]: E0813 01:07:46.185954 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:46.185992 kubelet[1833]: E0813 01:07:46.185971 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:46.186044 kubelet[1833]: E0813 01:07:46.186013 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" podUID="5bf0817c-2d50-4b1c-b228-be18c278aa6e" Aug 13 01:07:46.191225 containerd[1478]: time="2025-08-13T01:07:46.191200683Z" level=error msg="Failed to destroy network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.191637 containerd[1478]: time="2025-08-13T01:07:46.191608793Z" level=error msg="encountered an error cleaning up failed sandbox 
\"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.191671 containerd[1478]: time="2025-08-13T01:07:46.191645473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.191765 kubelet[1833]: E0813 01:07:46.191743 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:46.191813 kubelet[1833]: E0813 01:07:46.191773 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:46.191813 kubelet[1833]: E0813 01:07:46.191787 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:46.191860 kubelet[1833]: E0813 01:07:46.191813 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d" Aug 13 01:07:46.201754 systemd[1]: run-netns-cni\x2d018b3f43\x2ddc50\x2db7ca\x2df40f\x2d08a529f18476.mount: Deactivated successfully. Aug 13 01:07:46.608764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799261974.mount: Deactivated successfully. 
Aug 13 01:07:46.640768 containerd[1478]: time="2025-08-13T01:07:46.640719278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:46.641337 containerd[1478]: time="2025-08-13T01:07:46.641305698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:07:46.641930 containerd[1478]: time="2025-08-13T01:07:46.641804148Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:46.643226 containerd[1478]: time="2025-08-13T01:07:46.643192919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:46.643757 containerd[1478]: time="2025-08-13T01:07:46.643733729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.759547868s" Aug 13 01:07:46.643846 containerd[1478]: time="2025-08-13T01:07:46.643829569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:07:46.661680 containerd[1478]: time="2025-08-13T01:07:46.661655408Z" level=info msg="CreateContainer within sandbox \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:07:46.684412 containerd[1478]: time="2025-08-13T01:07:46.684344340Z" level=info 
msg="CreateContainer within sandbox \"f71baafc742d124e52c258dbcf4c28769c4c7f4b8f3f3732cdd4db2517d77b51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5d59110a924fc762700ae870691836b24995e5997a445ac96fbe97a3c38ef53a\"" Aug 13 01:07:46.684739 containerd[1478]: time="2025-08-13T01:07:46.684720660Z" level=info msg="StartContainer for \"5d59110a924fc762700ae870691836b24995e5997a445ac96fbe97a3c38ef53a\"" Aug 13 01:07:46.713083 systemd[1]: Started cri-containerd-5d59110a924fc762700ae870691836b24995e5997a445ac96fbe97a3c38ef53a.scope - libcontainer container 5d59110a924fc762700ae870691836b24995e5997a445ac96fbe97a3c38ef53a. Aug 13 01:07:46.751048 containerd[1478]: time="2025-08-13T01:07:46.751018743Z" level=info msg="StartContainer for \"5d59110a924fc762700ae870691836b24995e5997a445ac96fbe97a3c38ef53a\" returns successfully" Aug 13 01:07:46.767222 kubelet[1833]: E0813 01:07:46.767167 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:46.829786 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:07:46.829869 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:07:46.864207 systemd[1]: Created slice kubepods-besteffort-pod6538eb20_ef28_42cc_a8f0_f2d5f23ae51f.slice - libcontainer container kubepods-besteffort-pod6538eb20_ef28_42cc_a8f0_f2d5f23ae51f.slice. 
Aug 13 01:07:46.870366 kubelet[1833]: I0813 01:07:46.870341 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rtm\" (UniqueName: \"kubernetes.io/projected/6538eb20-ef28-42cc-a8f0-f2d5f23ae51f-kube-api-access-r2rtm\") pod \"nginx-deployment-7fcdb87857-rf29h\" (UID: \"6538eb20-ef28-42cc-a8f0-f2d5f23ae51f\") " pod="default/nginx-deployment-7fcdb87857-rf29h" Aug 13 01:07:46.975923 kubelet[1833]: I0813 01:07:46.974585 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.975161185Z" level=info msg="StopPodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\"" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.975305165Z" level=info msg="Ensure that sandbox d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2 in task-service has been cleanup successfully" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.975676235Z" level=info msg="TearDown network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" successfully" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.975689945Z" level=info msg="StopPodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" returns successfully" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.976001455Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\"" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.976069425Z" level=info msg="TearDown network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" successfully" Aug 13 01:07:46.976225 containerd[1478]: time="2025-08-13T01:07:46.976077905Z" level=info msg="StopPodSandbox for 
\"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" returns successfully" Aug 13 01:07:46.976604 containerd[1478]: time="2025-08-13T01:07:46.976362266Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\"" Aug 13 01:07:46.976604 containerd[1478]: time="2025-08-13T01:07:46.976424386Z" level=info msg="TearDown network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" successfully" Aug 13 01:07:46.976604 containerd[1478]: time="2025-08-13T01:07:46.976432806Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" returns successfully" Aug 13 01:07:46.976850 containerd[1478]: time="2025-08-13T01:07:46.976694636Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\"" Aug 13 01:07:46.976850 containerd[1478]: time="2025-08-13T01:07:46.976760146Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully" Aug 13 01:07:46.976850 containerd[1478]: time="2025-08-13T01:07:46.976768586Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully" Aug 13 01:07:46.977409 containerd[1478]: time="2025-08-13T01:07:46.977183126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:4,}" Aug 13 01:07:46.977974 kubelet[1833]: I0813 01:07:46.977950 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd" Aug 13 01:07:46.982696 containerd[1478]: time="2025-08-13T01:07:46.979257547Z" level=info msg="StopPodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\"" Aug 13 01:07:46.982696 containerd[1478]: 
time="2025-08-13T01:07:46.982180978Z" level=info msg="Ensure that sandbox 54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd in task-service has been cleanup successfully" Aug 13 01:07:46.983585 containerd[1478]: time="2025-08-13T01:07:46.983477649Z" level=info msg="TearDown network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" successfully" Aug 13 01:07:46.983920 containerd[1478]: time="2025-08-13T01:07:46.983536029Z" level=info msg="StopPodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" returns successfully" Aug 13 01:07:46.984406 containerd[1478]: time="2025-08-13T01:07:46.984372990Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\"" Aug 13 01:07:46.984468 containerd[1478]: time="2025-08-13T01:07:46.984447430Z" level=info msg="TearDown network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" successfully" Aug 13 01:07:46.984468 containerd[1478]: time="2025-08-13T01:07:46.984463810Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" returns successfully" Aug 13 01:07:46.984623 kubelet[1833]: I0813 01:07:46.984598 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4" Aug 13 01:07:46.984972 containerd[1478]: time="2025-08-13T01:07:46.984945390Z" level=info msg="StopPodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\"" Aug 13 01:07:46.985087 containerd[1478]: time="2025-08-13T01:07:46.985033080Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\"" Aug 13 01:07:46.985087 containerd[1478]: time="2025-08-13T01:07:46.985067770Z" level=info msg="Ensure that sandbox c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4 in task-service has been cleanup 
successfully" Aug 13 01:07:46.985292 containerd[1478]: time="2025-08-13T01:07:46.985205770Z" level=info msg="TearDown network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" successfully" Aug 13 01:07:46.985292 containerd[1478]: time="2025-08-13T01:07:46.985221170Z" level=info msg="StopPodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" returns successfully" Aug 13 01:07:46.985292 containerd[1478]: time="2025-08-13T01:07:46.985244840Z" level=info msg="TearDown network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" successfully" Aug 13 01:07:46.985292 containerd[1478]: time="2025-08-13T01:07:46.985257300Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" returns successfully" Aug 13 01:07:46.985849 containerd[1478]: time="2025-08-13T01:07:46.985822000Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\"" Aug 13 01:07:46.986083 containerd[1478]: time="2025-08-13T01:07:46.986057480Z" level=info msg="TearDown network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" successfully" Aug 13 01:07:46.986083 containerd[1478]: time="2025-08-13T01:07:46.986078950Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" returns successfully" Aug 13 01:07:46.986389 containerd[1478]: time="2025-08-13T01:07:46.986282070Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\"" Aug 13 01:07:46.986615 containerd[1478]: time="2025-08-13T01:07:46.986586341Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully" Aug 13 01:07:46.986615 containerd[1478]: time="2025-08-13T01:07:46.986604621Z" level=info msg="StopPodSandbox for 
\"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully" Aug 13 01:07:46.987190 containerd[1478]: time="2025-08-13T01:07:46.987164331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:4,}" Aug 13 01:07:46.987348 containerd[1478]: time="2025-08-13T01:07:46.987322041Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\"" Aug 13 01:07:46.987510 containerd[1478]: time="2025-08-13T01:07:46.987394561Z" level=info msg="TearDown network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" successfully" Aug 13 01:07:46.987510 containerd[1478]: time="2025-08-13T01:07:46.987408101Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" returns successfully" Aug 13 01:07:46.987802 containerd[1478]: time="2025-08-13T01:07:46.987774681Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\"" Aug 13 01:07:46.987937 containerd[1478]: time="2025-08-13T01:07:46.987889191Z" level=info msg="TearDown network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully" Aug 13 01:07:46.987937 containerd[1478]: time="2025-08-13T01:07:46.987933911Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully" Aug 13 01:07:46.988688 containerd[1478]: time="2025-08-13T01:07:46.988664612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:4,}" Aug 13 01:07:46.989406 kubelet[1833]: I0813 01:07:46.989366 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295" Aug 13 
01:07:46.991066 containerd[1478]: time="2025-08-13T01:07:46.991046793Z" level=info msg="StopPodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\"" Aug 13 01:07:46.991336 containerd[1478]: time="2025-08-13T01:07:46.991317933Z" level=info msg="Ensure that sandbox b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295 in task-service has been cleanup successfully" Aug 13 01:07:46.992370 containerd[1478]: time="2025-08-13T01:07:46.992349174Z" level=info msg="TearDown network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" successfully" Aug 13 01:07:46.992844 containerd[1478]: time="2025-08-13T01:07:46.992807124Z" level=info msg="StopPodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" returns successfully" Aug 13 01:07:46.993641 containerd[1478]: time="2025-08-13T01:07:46.993297774Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\"" Aug 13 01:07:46.993702 containerd[1478]: time="2025-08-13T01:07:46.993670274Z" level=info msg="TearDown network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" successfully" Aug 13 01:07:46.993702 containerd[1478]: time="2025-08-13T01:07:46.993682204Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" returns successfully" Aug 13 01:07:46.997016 containerd[1478]: time="2025-08-13T01:07:46.996985996Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\"" Aug 13 01:07:46.997157 containerd[1478]: time="2025-08-13T01:07:46.997142656Z" level=info msg="TearDown network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" successfully" Aug 13 01:07:46.997206 containerd[1478]: time="2025-08-13T01:07:46.997193936Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" returns 
successfully" Aug 13 01:07:46.998600 containerd[1478]: time="2025-08-13T01:07:46.997792606Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\"" Aug 13 01:07:46.998600 containerd[1478]: time="2025-08-13T01:07:46.997864776Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully" Aug 13 01:07:46.998600 containerd[1478]: time="2025-08-13T01:07:46.997873546Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully" Aug 13 01:07:46.998707 kubelet[1833]: I0813 01:07:46.998072 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e" Aug 13 01:07:46.999457 containerd[1478]: time="2025-08-13T01:07:46.999386137Z" level=info msg="StopPodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\"" Aug 13 01:07:46.999771 containerd[1478]: time="2025-08-13T01:07:46.999714247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:4,}" Aug 13 01:07:46.999911 containerd[1478]: time="2025-08-13T01:07:46.999857727Z" level=info msg="Ensure that sandbox db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e in task-service has been cleanup successfully" Aug 13 01:07:47.001819 containerd[1478]: time="2025-08-13T01:07:47.001749078Z" level=info msg="TearDown network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" successfully" Aug 13 01:07:47.001819 containerd[1478]: time="2025-08-13T01:07:47.001767078Z" level=info msg="StopPodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" returns successfully" Aug 13 01:07:47.003974 containerd[1478]: time="2025-08-13T01:07:47.003781429Z" level=info 
msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\"" Aug 13 01:07:47.003974 containerd[1478]: time="2025-08-13T01:07:47.003858119Z" level=info msg="TearDown network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" successfully" Aug 13 01:07:47.003974 containerd[1478]: time="2025-08-13T01:07:47.003867469Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" returns successfully" Aug 13 01:07:47.005207 containerd[1478]: time="2025-08-13T01:07:47.005178040Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\"" Aug 13 01:07:47.005286 containerd[1478]: time="2025-08-13T01:07:47.005261030Z" level=info msg="TearDown network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" successfully" Aug 13 01:07:47.005286 containerd[1478]: time="2025-08-13T01:07:47.005279310Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" returns successfully" Aug 13 01:07:47.005866 containerd[1478]: time="2025-08-13T01:07:47.005840390Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\"" Aug 13 01:07:47.006178 containerd[1478]: time="2025-08-13T01:07:47.005950320Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully" Aug 13 01:07:47.006178 containerd[1478]: time="2025-08-13T01:07:47.005961010Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully" Aug 13 01:07:47.006642 containerd[1478]: time="2025-08-13T01:07:47.006613951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:4,}" Aug 13 01:07:47.129715 kubelet[1833]: I0813 01:07:47.128650 
1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zkd5h" podStartSLOduration=3.650346516 podStartE2EDuration="14.128634772s" podCreationTimestamp="2025-08-13 01:07:33 +0000 UTC" firstStartedPulling="2025-08-13 01:07:36.168566115 +0000 UTC m=+4.067684863" lastFinishedPulling="2025-08-13 01:07:46.646854361 +0000 UTC m=+14.545973119" observedRunningTime="2025-08-13 01:07:47.127700161 +0000 UTC m=+15.026818909" watchObservedRunningTime="2025-08-13 01:07:47.128634772 +0000 UTC m=+15.027753520" Aug 13 01:07:47.141646 containerd[1478]: time="2025-08-13T01:07:47.141282938Z" level=error msg="Failed to destroy network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.142546 containerd[1478]: time="2025-08-13T01:07:47.142216748Z" level=error msg="encountered an error cleaning up failed sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.142546 containerd[1478]: time="2025-08-13T01:07:47.142267548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.143099 kubelet[1833]: E0813 01:07:47.142381 1833 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.143099 kubelet[1833]: E0813 01:07:47.142418 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:47.143099 kubelet[1833]: E0813 01:07:47.142440 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:47.143197 kubelet[1833]: E0813 01:07:47.142485 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d" Aug 13 01:07:47.168914 containerd[1478]: time="2025-08-13T01:07:47.168481362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rf29h,Uid:6538eb20-ef28-42cc-a8f0-f2d5f23ae51f,Namespace:default,Attempt:0,}" Aug 13 01:07:47.214959 systemd[1]: run-netns-cni\x2ddc2b024d\x2df597\x2dbd84\x2d0057\x2d0fa0a2e4dc5b.mount: Deactivated successfully. Aug 13 01:07:47.215064 systemd[1]: run-netns-cni\x2d6d746e14\x2d684a\x2d8d42\x2d6210\x2d3340fe03b8f4.mount: Deactivated successfully. Aug 13 01:07:47.215136 systemd[1]: run-netns-cni\x2d508f8191\x2df1b5\x2de0ea\x2d00e3\x2d54e8a28317bf.mount: Deactivated successfully. Aug 13 01:07:47.215203 systemd[1]: run-netns-cni\x2d1b5058b4\x2df873\x2d24f0\x2ded89\x2d7d6109d2a095.mount: Deactivated successfully. Aug 13 01:07:47.215272 systemd[1]: run-netns-cni\x2d020f6af9\x2d82ae\x2de89c\x2da5f7\x2d712143837622.mount: Deactivated successfully. Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.303 [INFO][3136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.304 [INFO][3136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" iface="eth0" netns="/var/run/netns/cni-ac73ad63-720b-88e7-930f-3712c6239c73" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.304 [INFO][3136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" iface="eth0" netns="/var/run/netns/cni-ac73ad63-720b-88e7-930f-3712c6239c73" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.304 [INFO][3136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" iface="eth0" netns="/var/run/netns/cni-ac73ad63-720b-88e7-930f-3712c6239c73" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.304 [INFO][3136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.304 [INFO][3136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.327 [INFO][3179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" HandleID="k8s-pod-network.133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Workload="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.327 [INFO][3179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.327 [INFO][3179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.370 [WARNING][3179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" HandleID="k8s-pod-network.133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Workload="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.370 [INFO][3179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" HandleID="k8s-pod-network.133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Workload="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.375 [INFO][3179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:47.396180 containerd[1478]: 2025-08-13 01:07:47.388 [INFO][3136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c" Aug 13 01:07:47.400957 systemd[1]: run-netns-cni\x2dac73ad63\x2d720b\x2d88e7\x2d930f\x2d3712c6239c73.mount: Deactivated successfully. Aug 13 01:07:47.406707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c-shm.mount: Deactivated successfully. 
Aug 13 01:07:47.410757 containerd[1478]: time="2025-08-13T01:07:47.410652353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.411052 kubelet[1833]: E0813 01:07:47.410985 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.411243 kubelet[1833]: E0813 01:07:47.411159 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:47.411243 kubelet[1833]: E0813 01:07:47.411186 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc77x" Aug 13 01:07:47.411331 
kubelet[1833]: E0813 01:07:47.411229 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc77x_calico-system(f5ba40c2-4d45-4179-8c4f-7fe837c00595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"133efb35f18d739dea7e6407d7936dee32edbee63b21601f524887f67ea2177c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc77x" podUID="f5ba40c2-4d45-4179-8c4f-7fe837c00595" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.324 [INFO][3141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.324 [INFO][3141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" iface="eth0" netns="/var/run/netns/cni-cef7aef7-1712-8ead-c0d2-6a052fb6967b" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.325 [INFO][3141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" iface="eth0" netns="/var/run/netns/cni-cef7aef7-1712-8ead-c0d2-6a052fb6967b" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.325 [INFO][3141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" iface="eth0" netns="/var/run/netns/cni-cef7aef7-1712-8ead-c0d2-6a052fb6967b" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.325 [INFO][3141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.325 [INFO][3141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.346 [INFO][3186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" HandleID="k8s-pod-network.92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.346 [INFO][3186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.375 [INFO][3186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.391 [WARNING][3186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" HandleID="k8s-pod-network.92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.391 [INFO][3186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" HandleID="k8s-pod-network.92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.399 [INFO][3186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:47.414330 containerd[1478]: 2025-08-13 01:07:47.410 [INFO][3141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8" Aug 13 01:07:47.416424 systemd[1]: run-netns-cni\x2dcef7aef7\x2d1712\x2d8ead\x2dc0d2\x2d6a052fb6967b.mount: Deactivated successfully. Aug 13 01:07:47.417253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8-shm.mount: Deactivated successfully. 
Aug 13 01:07:47.422635 containerd[1478]: time="2025-08-13T01:07:47.421347758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.423105 kubelet[1833]: E0813 01:07:47.422977 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.423105 kubelet[1833]: E0813 01:07:47.423029 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:47.423105 kubelet[1833]: E0813 01:07:47.423048 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:07:47.423198 kubelet[1833]: E0813 01:07:47.423084 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-2lw7j_calico-apiserver(6b7cf428-2808-4afb-aea8-f874628caa6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92c70dac0a02cd3260e66a1b12d4718fc68c8cbe2593e465000ff3d94af60ca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" podUID="6b7cf428-2808-4afb-aea8-f874628caa6c" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.361 [INFO][3108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.362 [INFO][3108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" iface="eth0" netns="/var/run/netns/cni-711771d5-5c4d-eee0-e5c1-d31513eb3753" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.362 [INFO][3108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" iface="eth0" netns="/var/run/netns/cni-711771d5-5c4d-eee0-e5c1-d31513eb3753" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.362 [INFO][3108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" iface="eth0" netns="/var/run/netns/cni-711771d5-5c4d-eee0-e5c1-d31513eb3753" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.362 [INFO][3108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.362 [INFO][3108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.414 [INFO][3192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" HandleID="k8s-pod-network.134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.416 [INFO][3192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.416 [INFO][3192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.427 [WARNING][3192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" HandleID="k8s-pod-network.134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.427 [INFO][3192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" HandleID="k8s-pod-network.134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.430 [INFO][3192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:47.434686 containerd[1478]: 2025-08-13 01:07:47.433 [INFO][3108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105" Aug 13 01:07:47.437034 systemd[1]: run-netns-cni\x2d711771d5\x2d5c4d\x2deee0\x2de5c1\x2dd31513eb3753.mount: Deactivated successfully. Aug 13 01:07:47.437770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105-shm.mount: Deactivated successfully. 
Aug 13 01:07:47.438261 containerd[1478]: time="2025-08-13T01:07:47.437781556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.439035 kubelet[1833]: E0813 01:07:47.438818 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.439035 kubelet[1833]: E0813 01:07:47.438855 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:47.439035 kubelet[1833]: E0813 01:07:47.438873 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:47.440102 
kubelet[1833]: E0813 01:07:47.439855 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"134d76b357db31811d967eeacf29734c4884990e867f9d65f95e0611b2d46105\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" podUID="c98841f2-352f-43ac-b754-01bf12142833" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.367 [INFO][3146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.367 [INFO][3146] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" iface="eth0" netns="/var/run/netns/cni-611613f0-2944-7d7e-2804-fb9e1086a746" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.368 [INFO][3146] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" iface="eth0" netns="/var/run/netns/cni-611613f0-2944-7d7e-2804-fb9e1086a746" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.369 [INFO][3146] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" iface="eth0" netns="/var/run/netns/cni-611613f0-2944-7d7e-2804-fb9e1086a746" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.369 [INFO][3146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.369 [INFO][3146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.424 [INFO][3197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" HandleID="k8s-pod-network.34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.424 [INFO][3197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.430 [INFO][3197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.444 [WARNING][3197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" HandleID="k8s-pod-network.34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.445 [INFO][3197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" HandleID="k8s-pod-network.34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.448 [INFO][3197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:47.451835 containerd[1478]: 2025-08-13 01:07:47.450 [INFO][3146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe" Aug 13 01:07:47.454242 containerd[1478]: time="2025-08-13T01:07:47.454131454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.454598 kubelet[1833]: E0813 01:07:47.454504 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:07:47.454598 
kubelet[1833]: E0813 01:07:47.454541 1833 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:47.454598 kubelet[1833]: E0813 01:07:47.454559 1833 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" Aug 13 01:07:47.454814 kubelet[1833]: E0813 01:07:47.454774 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67866967cc-xcncr_calico-apiserver(5bf0817c-2d50-4b1c-b228-be18c278aa6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" podUID="5bf0817c-2d50-4b1c-b228-be18c278aa6e" Aug 13 01:07:47.542534 systemd-networkd[1393]: cali1a3df9e0403: Link UP Aug 13 01:07:47.542737 systemd-networkd[1393]: cali1a3df9e0403: Gained carrier Aug 13 
01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.275 [INFO][3157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.330 [INFO][3157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0 nginx-deployment-7fcdb87857- default 6538eb20-ef28-42cc-a8f0-f2d5f23ae51f 5935 0 2025-08-13 01:07:46 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 192.168.133.100 nginx-deployment-7fcdb87857-rf29h eth0 default [] [] [kns.default ksa.default.default] cali1a3df9e0403 [] [] }} ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.330 [INFO][3157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.425 [INFO][3199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.425 [INFO][3199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" 
HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5aa0), Attrs:map[string]string{"namespace":"default", "node":"192.168.133.100", "pod":"nginx-deployment-7fcdb87857-rf29h", "timestamp":"2025-08-13 01:07:47.42520721 +0000 UTC"}, Hostname:"192.168.133.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.425 [INFO][3199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.448 [INFO][3199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.448 [INFO][3199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.133.100' Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.475 [INFO][3199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.498 [INFO][3199] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.512 [INFO][3199] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.515 [INFO][3199] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.521 [INFO][3199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="192.168.133.100" 
Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.521 [INFO][3199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.523 [INFO][3199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816 Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.526 [INFO][3199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.532 [INFO][3199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.129/26] block=192.168.23.128/26 handle="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.532 [INFO][3199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.129/26] handle="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" host="192.168.133.100" Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.532 [INFO][3199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:07:47.567172 containerd[1478]: 2025-08-13 01:07:47.532 [INFO][3199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.129/26] IPv6=[] ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.567634 containerd[1478]: 2025-08-13 01:07:47.535 [INFO][3157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"6538eb20-ef28-42cc-a8f0-f2d5f23ae51f", ResourceVersion:"5935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-rf29h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1a3df9e0403", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:47.567634 containerd[1478]: 2025-08-13 01:07:47.535 [INFO][3157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.129/32] ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.567634 containerd[1478]: 2025-08-13 01:07:47.535 [INFO][3157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a3df9e0403 ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.567634 containerd[1478]: 2025-08-13 01:07:47.544 [INFO][3157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.567634 containerd[1478]: 2025-08-13 01:07:47.545 [INFO][3157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"6538eb20-ef28-42cc-a8f0-f2d5f23ae51f", ResourceVersion:"5935", Generation:0, 
CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816", Pod:"nginx-deployment-7fcdb87857-rf29h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1a3df9e0403", MAC:"72:94:17:fd:0f:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:47.567634 containerd[1478]: 2025-08-13 01:07:47.565 [INFO][3157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Namespace="default" Pod="nginx-deployment-7fcdb87857-rf29h" WorkloadEndpoint="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:07:47.583712 containerd[1478]: time="2025-08-13T01:07:47.583540309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:47.583880 containerd[1478]: time="2025-08-13T01:07:47.583627719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:47.583880 containerd[1478]: time="2025-08-13T01:07:47.583783909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:47.584733 containerd[1478]: time="2025-08-13T01:07:47.584558259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:47.602026 systemd[1]: Started cri-containerd-61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816.scope - libcontainer container 61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816. Aug 13 01:07:47.639227 containerd[1478]: time="2025-08-13T01:07:47.639197617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rf29h,Uid:6538eb20-ef28-42cc-a8f0-f2d5f23ae51f,Namespace:default,Attempt:0,} returns sandbox id \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\"" Aug 13 01:07:47.640992 containerd[1478]: time="2025-08-13T01:07:47.640719727Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 01:07:47.767931 kubelet[1833]: E0813 01:07:47.767783 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:48.004887 kubelet[1833]: I0813 01:07:48.003634 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004272599Z" level=info msg="StopPodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\"" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004360109Z" level=info msg="TearDown network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004371769Z" level=info msg="StopPodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" returns successfully" Aug 13 01:07:48.005373 containerd[1478]: 
time="2025-08-13T01:07:48.004426839Z" level=info msg="StopPodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\"" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004490459Z" level=info msg="TearDown network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004498939Z" level=info msg="StopPodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" returns successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004528979Z" level=info msg="StopPodSandbox for \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\"" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004675019Z" level=info msg="Ensure that sandbox 097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6 in task-service has been cleanup successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004834779Z" level=info msg="TearDown network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\" successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.004846089Z" level=info msg="StopPodSandbox for \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\" returns successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.005109940Z" level=info msg="StopPodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\"" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.005177640Z" level=info msg="TearDown network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.005186240Z" level=info msg="StopPodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" returns successfully" Aug 13 01:07:48.005373 
containerd[1478]: time="2025-08-13T01:07:48.005234590Z" level=info msg="StopPodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\"" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.005297880Z" level=info msg="TearDown network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" successfully" Aug 13 01:07:48.005373 containerd[1478]: time="2025-08-13T01:07:48.005305610Z" level=info msg="StopPodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" returns successfully" Aug 13 01:07:48.007054 containerd[1478]: time="2025-08-13T01:07:48.007030030Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\"" Aug 13 01:07:48.007359 containerd[1478]: time="2025-08-13T01:07:48.007332011Z" level=info msg="TearDown network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" successfully" Aug 13 01:07:48.007359 containerd[1478]: time="2025-08-13T01:07:48.007352531Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" returns successfully" Aug 13 01:07:48.007635 containerd[1478]: time="2025-08-13T01:07:48.007190741Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\"" Aug 13 01:07:48.007635 containerd[1478]: time="2025-08-13T01:07:48.007630101Z" level=info msg="TearDown network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" successfully" Aug 13 01:07:48.007683 containerd[1478]: time="2025-08-13T01:07:48.007638361Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" returns successfully" Aug 13 01:07:48.007683 containerd[1478]: time="2025-08-13T01:07:48.007242491Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\"" Aug 13 01:07:48.007732 containerd[1478]: 
time="2025-08-13T01:07:48.007705941Z" level=info msg="TearDown network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" successfully" Aug 13 01:07:48.007732 containerd[1478]: time="2025-08-13T01:07:48.007713751Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" returns successfully" Aug 13 01:07:48.007732 containerd[1478]: time="2025-08-13T01:07:48.007254561Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\"" Aug 13 01:07:48.007793 containerd[1478]: time="2025-08-13T01:07:48.007780801Z" level=info msg="TearDown network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" successfully" Aug 13 01:07:48.007793 containerd[1478]: time="2025-08-13T01:07:48.007790151Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" returns successfully" Aug 13 01:07:48.007838 containerd[1478]: time="2025-08-13T01:07:48.007264861Z" level=info msg="StopPodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\"" Aug 13 01:07:48.008412 containerd[1478]: time="2025-08-13T01:07:48.007856521Z" level=info msg="TearDown network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" successfully" Aug 13 01:07:48.008412 containerd[1478]: time="2025-08-13T01:07:48.007870281Z" level=info msg="StopPodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" returns successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.008939481Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\"" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.008964611Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\"" Aug 13 01:07:48.009300 containerd[1478]: 
time="2025-08-13T01:07:48.009024401Z" level=info msg="TearDown network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009035031Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" returns successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009036731Z" level=info msg="TearDown network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009068722Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" returns successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009093472Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\"" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009107762Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\"" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009150872Z" level=info msg="TearDown network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009159762Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" returns successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009166502Z" level=info msg="TearDown network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009174172Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" returns successfully" Aug 13 01:07:48.009300 
containerd[1478]: time="2025-08-13T01:07:48.009058402Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\"" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009248062Z" level=info msg="TearDown network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" successfully" Aug 13 01:07:48.009300 containerd[1478]: time="2025-08-13T01:07:48.009256812Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" returns successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011296063Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\"" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011403373Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011418343Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011469013Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\"" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011533733Z" level=info msg="TearDown network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011732543Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" returns successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011782333Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\"" Aug 13 01:07:48.011979 containerd[1478]: 
time="2025-08-13T01:07:48.011837973Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011846073Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully" Aug 13 01:07:48.011979 containerd[1478]: time="2025-08-13T01:07:48.011874423Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\"" Aug 13 01:07:48.012419 containerd[1478]: time="2025-08-13T01:07:48.012057253Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\"" Aug 13 01:07:48.012419 containerd[1478]: time="2025-08-13T01:07:48.012130453Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully" Aug 13 01:07:48.012419 containerd[1478]: time="2025-08-13T01:07:48.012141383Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully" Aug 13 01:07:48.013262 containerd[1478]: time="2025-08-13T01:07:48.012919473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:4,}" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.013667224Z" level=info msg="TearDown network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.013686164Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.013718364Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:4,}" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.013939074Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\"" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.014009214Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.014018934Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.014020364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:4,}" Aug 13 01:07:48.014187 containerd[1478]: time="2025-08-13T01:07:48.014070874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:4,}" Aug 13 01:07:48.016501 containerd[1478]: time="2025-08-13T01:07:48.016211635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:5,}" Aug 13 01:07:48.218423 systemd[1]: run-netns-cni\x2d611613f0\x2d2944\x2d7d7e\x2d2804\x2dfb9e1086a746.mount: Deactivated successfully. Aug 13 01:07:48.218523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34acb188152cb419a04f89bc8d83faab41d9662d9f1bf87aee93dc00bab43bbe-shm.mount: Deactivated successfully. Aug 13 01:07:48.218598 systemd[1]: run-netns-cni\x2db81eaaed\x2d7f83\x2d89a8\x2dd765\x2d7147a8cef29c.mount: Deactivated successfully. 
Aug 13 01:07:48.297414 systemd-networkd[1393]: cali4fa5ae18eea: Link UP Aug 13 01:07:48.299543 systemd-networkd[1393]: cali4fa5ae18eea: Gained carrier Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.146 [INFO][3324] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.166 [INFO][3324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.133.100-k8s-csi--node--driver--bc77x-eth0 csi-node-driver- calico-system f5ba40c2-4d45-4179-8c4f-7fe837c00595 5951 0 2025-08-13 01:07:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 192.168.133.100 csi-node-driver-bc77x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4fa5ae18eea [] [] }} ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.166 [INFO][3324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.225 [INFO][3374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" HandleID="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Workload="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 
01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.225 [INFO][3374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" HandleID="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Workload="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.133.100", "pod":"csi-node-driver-bc77x", "timestamp":"2025-08-13 01:07:48.22569659 +0000 UTC"}, Hostname:"192.168.133.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.226 [INFO][3374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.227 [INFO][3374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.228 [INFO][3374] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.133.100' Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.246 [INFO][3374] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.253 [INFO][3374] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.264 [INFO][3374] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.266 [INFO][3374] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.268 [INFO][3374] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.268 [INFO][3374] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.270 [INFO][3374] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68 Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.275 [INFO][3374] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.282 [INFO][3374] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.130/26] block=192.168.23.128/26 
handle="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.282 [INFO][3374] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.130/26] handle="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" host="192.168.133.100" Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.282 [INFO][3374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.327993 containerd[1478]: 2025-08-13 01:07:48.282 [INFO][3374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.130/26] IPv6=[] ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" HandleID="k8s-pod-network.3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Workload="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:48.328612 containerd[1478]: 2025-08-13 01:07:48.289 [INFO][3324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-csi--node--driver--bc77x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f5ba40c2-4d45-4179-8c4f-7fe837c00595", ResourceVersion:"5951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"", Pod:"csi-node-driver-bc77x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4fa5ae18eea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.328612 containerd[1478]: 2025-08-13 01:07:48.289 [INFO][3324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.130/32] ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:48.328612 containerd[1478]: 2025-08-13 01:07:48.289 [INFO][3324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fa5ae18eea ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:48.328612 containerd[1478]: 2025-08-13 01:07:48.301 [INFO][3324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:48.328612 containerd[1478]: 2025-08-13 01:07:48.303 [INFO][3324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-csi--node--driver--bc77x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f5ba40c2-4d45-4179-8c4f-7fe837c00595", ResourceVersion:"5951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 7, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68", Pod:"csi-node-driver-bc77x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4fa5ae18eea", MAC:"4e:00:ec:db:08:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.328612 containerd[1478]: 2025-08-13 01:07:48.322 [INFO][3324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68" 
Namespace="calico-system" Pod="csi-node-driver-bc77x" WorkloadEndpoint="192.168.133.100-k8s-csi--node--driver--bc77x-eth0" Aug 13 01:07:48.381584 containerd[1478]: time="2025-08-13T01:07:48.380270527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.381584 containerd[1478]: time="2025-08-13T01:07:48.380322157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.381584 containerd[1478]: time="2025-08-13T01:07:48.380335197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.381584 containerd[1478]: time="2025-08-13T01:07:48.380435097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.402206 systemd-networkd[1393]: caliaef0fc32422: Link UP Aug 13 01:07:48.403607 systemd-networkd[1393]: caliaef0fc32422: Gained carrier Aug 13 01:07:48.429592 systemd[1]: run-containerd-runc-k8s.io-3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68-runc.vkwPBp.mount: Deactivated successfully. 
Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.111 [INFO][3306] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.143 [INFO][3306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0 calico-apiserver-67866967cc- calico-apiserver 5bf0817c-2d50-4b1c-b228-be18c278aa6e 5954 0 2025-08-13 01:06:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67866967cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 192.168.133.100 calico-apiserver-67866967cc-xcncr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaef0fc32422 [] [] }} ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.143 [INFO][3306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.244 [INFO][3359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.244 [INFO][3359] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"192.168.133.100", "pod":"calico-apiserver-67866967cc-xcncr", "timestamp":"2025-08-13 01:07:48.244306729 +0000 UTC"}, Hostname:"192.168.133.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.245 [INFO][3359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.282 [INFO][3359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.283 [INFO][3359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.133.100' Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.346 [INFO][3359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.354 [INFO][3359] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.359 [INFO][3359] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.362 [INFO][3359] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.365 [INFO][3359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.365 [INFO][3359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.366 [INFO][3359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02 Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.371 [INFO][3359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.380 [INFO][3359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.131/26] block=192.168.23.128/26 
handle="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.380 [INFO][3359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.131/26] handle="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" host="192.168.133.100" Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.380 [INFO][3359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.437328 containerd[1478]: 2025-08-13 01:07:48.380 [INFO][3359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.131/26] IPv6=[] ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.437781 containerd[1478]: 2025-08-13 01:07:48.387 [INFO][3306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0", GenerateName:"calico-apiserver-67866967cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bf0817c-2d50-4b1c-b228-be18c278aa6e", ResourceVersion:"5954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67866967cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"", Pod:"calico-apiserver-67866967cc-xcncr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaef0fc32422", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.437781 containerd[1478]: 2025-08-13 01:07:48.387 [INFO][3306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.131/32] ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.437781 containerd[1478]: 2025-08-13 01:07:48.387 [INFO][3306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaef0fc32422 ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.437781 containerd[1478]: 2025-08-13 01:07:48.404 [INFO][3306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.437781 containerd[1478]: 2025-08-13 01:07:48.405 [INFO][3306] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0", GenerateName:"calico-apiserver-67866967cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bf0817c-2d50-4b1c-b228-be18c278aa6e", ResourceVersion:"5954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67866967cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02", Pod:"calico-apiserver-67866967cc-xcncr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaef0fc32422", MAC:"8e:56:fb:37:20:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.437781 containerd[1478]: 2025-08-13 01:07:48.431 [INFO][3306] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-xcncr" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:48.442255 systemd[1]: Started cri-containerd-3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68.scope - libcontainer container 3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68. Aug 13 01:07:48.481963 containerd[1478]: time="2025-08-13T01:07:48.481212407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.481963 containerd[1478]: time="2025-08-13T01:07:48.481555368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.481963 containerd[1478]: time="2025-08-13T01:07:48.481572398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.482256 containerd[1478]: time="2025-08-13T01:07:48.482110548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.516820 containerd[1478]: time="2025-08-13T01:07:48.516790525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc77x,Uid:f5ba40c2-4d45-4179-8c4f-7fe837c00595,Namespace:calico-system,Attempt:4,} returns sandbox id \"3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68\"" Aug 13 01:07:48.520146 systemd-networkd[1393]: calic455ddf6d00: Link UP Aug 13 01:07:48.522851 systemd-networkd[1393]: calic455ddf6d00: Gained carrier Aug 13 01:07:48.547060 systemd[1]: Started cri-containerd-d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02.scope - libcontainer container d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02. Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.132 [INFO][3295] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.149 [INFO][3295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0 goldmane-768f4c5c69- calico-system c98841f2-352f-43ac-b754-01bf12142833 5953 0 2025-08-13 01:06:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 192.168.133.100 goldmane-768f4c5c69-x4s2d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic455ddf6d00 [] [] }} ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.150 [INFO][3295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.246 [INFO][3361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.246 [INFO][3361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59d0), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.133.100", "pod":"goldmane-768f4c5c69-x4s2d", "timestamp":"2025-08-13 01:07:48.24661484 +0000 UTC"}, Hostname:"192.168.133.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.247 [INFO][3361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.381 [INFO][3361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.381 [INFO][3361] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.133.100' Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.446 [INFO][3361] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.456 [INFO][3361] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.462 [INFO][3361] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.463 [INFO][3361] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.467 [INFO][3361] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.467 [INFO][3361] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.469 [INFO][3361] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881 Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.477 [INFO][3361] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.490 [INFO][3361] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.132/26] block=192.168.23.128/26 
handle="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.490 [INFO][3361] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.132/26] handle="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" host="192.168.133.100" Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.490 [INFO][3361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.565537 containerd[1478]: 2025-08-13 01:07:48.490 [INFO][3361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.132/26] IPv6=[] ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.566861 containerd[1478]: 2025-08-13 01:07:48.500 [INFO][3295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c98841f2-352f-43ac-b754-01bf12142833", ResourceVersion:"5953", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"", Pod:"goldmane-768f4c5c69-x4s2d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic455ddf6d00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.566861 containerd[1478]: 2025-08-13 01:07:48.503 [INFO][3295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.132/32] ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.566861 containerd[1478]: 2025-08-13 01:07:48.503 [INFO][3295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic455ddf6d00 ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.566861 containerd[1478]: 2025-08-13 01:07:48.530 [INFO][3295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.566861 containerd[1478]: 2025-08-13 01:07:48.533 [INFO][3295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" 
Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c98841f2-352f-43ac-b754-01bf12142833", ResourceVersion:"5953", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881", Pod:"goldmane-768f4c5c69-x4s2d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic455ddf6d00", MAC:"ee:29:99:d1:76:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.566861 containerd[1478]: 2025-08-13 01:07:48.554 [INFO][3295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Namespace="calico-system" Pod="goldmane-768f4c5c69-x4s2d" WorkloadEndpoint="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:48.619370 systemd-networkd[1393]: 
cali1a3df9e0403: Gained IPv6LL Aug 13 01:07:48.636472 systemd-networkd[1393]: calif57c8335e0f: Link UP Aug 13 01:07:48.637623 systemd-networkd[1393]: calif57c8335e0f: Gained carrier Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.138 [INFO][3335] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.165 [INFO][3335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0 whisker-65ffbb4b4d- calico-system 61ce1787-bb8a-413c-8736-b5b6cbd4da1d 5873 0 2025-08-13 01:06:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65ffbb4b4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 192.168.133.100 whisker-65ffbb4b4d-js9cm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif57c8335e0f [] [] }} ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.166 [INFO][3335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.251 [INFO][3376] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 
01:07:48.251 [INFO][3376] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f770), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.133.100", "pod":"whisker-65ffbb4b4d-js9cm", "timestamp":"2025-08-13 01:07:48.251201182 +0000 UTC"}, Hostname:"192.168.133.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.251 [INFO][3376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.495 [INFO][3376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.495 [INFO][3376] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.133.100' Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.545 [INFO][3376] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.558 [INFO][3376] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.565 [INFO][3376] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.566 [INFO][3376] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.569 [INFO][3376] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.569 [INFO][3376] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.571 [INFO][3376] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1 Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.577 [INFO][3376] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.586 [INFO][3376] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.133/26] block=192.168.23.128/26 
handle="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.589 [INFO][3376] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.133/26] handle="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" host="192.168.133.100" Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.589 [INFO][3376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.678829 containerd[1478]: 2025-08-13 01:07:48.589 [INFO][3376] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.133/26] IPv6=[] ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.679635 containerd[1478]: 2025-08-13 01:07:48.607 [INFO][3335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0", GenerateName:"whisker-65ffbb4b4d-", Namespace:"calico-system", SelfLink:"", UID:"61ce1787-bb8a-413c-8736-b5b6cbd4da1d", ResourceVersion:"5873", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65ffbb4b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"", Pod:"whisker-65ffbb4b4d-js9cm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif57c8335e0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.679635 containerd[1478]: 2025-08-13 01:07:48.608 [INFO][3335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.133/32] ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.679635 containerd[1478]: 2025-08-13 01:07:48.608 [INFO][3335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif57c8335e0f ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.679635 containerd[1478]: 2025-08-13 01:07:48.642 [INFO][3335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.679635 containerd[1478]: 2025-08-13 01:07:48.644 [INFO][3335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" 
Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0", GenerateName:"whisker-65ffbb4b4d-", Namespace:"calico-system", SelfLink:"", UID:"61ce1787-bb8a-413c-8736-b5b6cbd4da1d", ResourceVersion:"5873", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65ffbb4b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1", Pod:"whisker-65ffbb4b4d-js9cm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif57c8335e0f", MAC:"d6:9e:71:84:cf:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.679635 containerd[1478]: 2025-08-13 01:07:48.666 [INFO][3335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Namespace="calico-system" Pod="whisker-65ffbb4b4d-js9cm" WorkloadEndpoint="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:48.700661 containerd[1478]: 
time="2025-08-13T01:07:48.696876795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-xcncr,Uid:5bf0817c-2d50-4b1c-b228-be18c278aa6e,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\"" Aug 13 01:07:48.714298 containerd[1478]: time="2025-08-13T01:07:48.709867202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.714298 containerd[1478]: time="2025-08-13T01:07:48.709940682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.714298 containerd[1478]: time="2025-08-13T01:07:48.709954622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.714298 containerd[1478]: time="2025-08-13T01:07:48.710087352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.751038 systemd[1]: Started cri-containerd-f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881.scope - libcontainer container f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881. Aug 13 01:07:48.768182 kubelet[1833]: E0813 01:07:48.768158 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:48.789445 systemd-networkd[1393]: calif333340ed71: Link UP Aug 13 01:07:48.789640 systemd-networkd[1393]: calif333340ed71: Gained carrier Aug 13 01:07:48.826993 containerd[1478]: time="2025-08-13T01:07:48.825387059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.826993 containerd[1478]: time="2025-08-13T01:07:48.825633850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.826993 containerd[1478]: time="2025-08-13T01:07:48.825647040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.831318 containerd[1478]: time="2025-08-13T01:07:48.830934452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.125 [INFO][3317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.142 [INFO][3317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0 calico-apiserver-67866967cc- calico-apiserver 6b7cf428-2808-4afb-aea8-f874628caa6c 5952 0 2025-08-13 01:06:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67866967cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 192.168.133.100 calico-apiserver-67866967cc-2lw7j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif333340ed71 [] [] }} ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.142 [INFO][3317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.260 [INFO][3363] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.261 [INFO][3363] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ed9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"192.168.133.100", "pod":"calico-apiserver-67866967cc-2lw7j", "timestamp":"2025-08-13 01:07:48.260456857 +0000 UTC"}, Hostname:"192.168.133.100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.261 [INFO][3363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.589 [INFO][3363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.589 [INFO][3363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.133.100' Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.649 [INFO][3363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.674 [INFO][3363] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.690 [INFO][3363] ipam/ipam.go 511: Trying affinity for 192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.697 [INFO][3363] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.702 [INFO][3363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.705 [INFO][3363] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.712 [INFO][3363] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6 Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.722 [INFO][3363] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.736 [INFO][3363] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.134/26] block=192.168.23.128/26 
handle="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.736 [INFO][3363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.134/26] handle="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" host="192.168.133.100" Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.736 [INFO][3363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:48.833012 containerd[1478]: 2025-08-13 01:07:48.736 [INFO][3363] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.134/26] IPv6=[] ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.833448 containerd[1478]: 2025-08-13 01:07:48.764 [INFO][3317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0", GenerateName:"calico-apiserver-67866967cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b7cf428-2808-4afb-aea8-f874628caa6c", ResourceVersion:"5952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67866967cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"", Pod:"calico-apiserver-67866967cc-2lw7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif333340ed71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.833448 containerd[1478]: 2025-08-13 01:07:48.765 [INFO][3317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.134/32] ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.833448 containerd[1478]: 2025-08-13 01:07:48.766 [INFO][3317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif333340ed71 ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.833448 containerd[1478]: 2025-08-13 01:07:48.786 [INFO][3317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.833448 containerd[1478]: 2025-08-13 01:07:48.787 [INFO][3317] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0", GenerateName:"calico-apiserver-67866967cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b7cf428-2808-4afb-aea8-f874628caa6c", ResourceVersion:"5952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67866967cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.133.100", ContainerID:"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6", Pod:"calico-apiserver-67866967cc-2lw7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif333340ed71", MAC:"e6:d0:a3:76:38:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:07:48.833448 containerd[1478]: 2025-08-13 01:07:48.815 [INFO][3317] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Namespace="calico-apiserver" Pod="calico-apiserver-67866967cc-2lw7j" WorkloadEndpoint="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:07:48.873025 systemd[1]: Started cri-containerd-44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1.scope - libcontainer container 44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1. Aug 13 01:07:48.885587 containerd[1478]: time="2025-08-13T01:07:48.885255849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-x4s2d,Uid:c98841f2-352f-43ac-b754-01bf12142833,Namespace:calico-system,Attempt:4,} returns sandbox id \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\"" Aug 13 01:07:48.915711 containerd[1478]: time="2025-08-13T01:07:48.915204944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:07:48.915711 containerd[1478]: time="2025-08-13T01:07:48.915289174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:07:48.915711 containerd[1478]: time="2025-08-13T01:07:48.915306344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.915711 containerd[1478]: time="2025-08-13T01:07:48.915435354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:07:48.961026 systemd[1]: Started cri-containerd-56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6.scope - libcontainer container 56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6. 
Aug 13 01:07:49.034960 containerd[1478]: time="2025-08-13T01:07:49.034932584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65ffbb4b4d-js9cm,Uid:61ce1787-bb8a-413c-8736-b5b6cbd4da1d,Namespace:calico-system,Attempt:5,} returns sandbox id \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\"" Aug 13 01:07:49.104021 containerd[1478]: time="2025-08-13T01:07:49.103491248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67866967cc-2lw7j,Uid:6b7cf428-2808-4afb-aea8-f874628caa6c,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\"" Aug 13 01:07:49.240046 kernel: bpftool[3753]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 01:07:49.505622 systemd-networkd[1393]: vxlan.calico: Link UP Aug 13 01:07:49.505631 systemd-networkd[1393]: vxlan.calico: Gained carrier Aug 13 01:07:49.643055 systemd-networkd[1393]: caliaef0fc32422: Gained IPv6LL Aug 13 01:07:49.644088 systemd-networkd[1393]: calic455ddf6d00: Gained IPv6LL Aug 13 01:07:49.769239 kubelet[1833]: E0813 01:07:49.769143 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:49.899040 systemd-networkd[1393]: cali4fa5ae18eea: Gained IPv6LL Aug 13 01:07:50.029167 systemd-networkd[1393]: calif333340ed71: Gained IPv6LL Aug 13 01:07:50.184718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634178449.mount: Deactivated successfully. 
Aug 13 01:07:50.476146 systemd-networkd[1393]: calif57c8335e0f: Gained IPv6LL
Aug 13 01:07:50.769821 kubelet[1833]: E0813 01:07:50.769705 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:50.795038 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL
Aug 13 01:07:51.184213 containerd[1478]: time="2025-08-13T01:07:51.184161368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:51.185334 containerd[1478]: time="2025-08-13T01:07:51.185276309Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73303204"
Aug 13 01:07:51.186087 containerd[1478]: time="2025-08-13T01:07:51.186037229Z" level=info msg="ImageCreate event name:\"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:51.189026 containerd[1478]: time="2025-08-13T01:07:51.188997290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:51.189971 containerd[1478]: time="2025-08-13T01:07:51.189945531Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 3.549199074s"
Aug 13 01:07:51.190036 containerd[1478]: time="2025-08-13T01:07:51.189976081Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\""
Aug 13 01:07:51.192047 containerd[1478]: time="2025-08-13T01:07:51.192021022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Aug 13 01:07:51.193790 containerd[1478]: time="2025-08-13T01:07:51.193766113Z" level=info msg="CreateContainer within sandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Aug 13 01:07:51.203346 containerd[1478]: time="2025-08-13T01:07:51.203289558Z" level=info msg="CreateContainer within sandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\""
Aug 13 01:07:51.205076 containerd[1478]: time="2025-08-13T01:07:51.205044718Z" level=info msg="StartContainer for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\""
Aug 13 01:07:51.235475 systemd[1]: run-containerd-runc-k8s.io-153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32-runc.oTcWrN.mount: Deactivated successfully.
Aug 13 01:07:51.247032 systemd[1]: Started cri-containerd-153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32.scope - libcontainer container 153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32.
Aug 13 01:07:51.278288 containerd[1478]: time="2025-08-13T01:07:51.278251735Z" level=info msg="StartContainer for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" returns successfully"
Aug 13 01:07:51.340456 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 01:07:51.770411 kubelet[1833]: E0813 01:07:51.770364 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:51.843181 containerd[1478]: time="2025-08-13T01:07:51.843125317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:51.844071 containerd[1478]: time="2025-08-13T01:07:51.844033058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Aug 13 01:07:51.844616 containerd[1478]: time="2025-08-13T01:07:51.844579798Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:51.846754 containerd[1478]: time="2025-08-13T01:07:51.846100049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:51.846754 containerd[1478]: time="2025-08-13T01:07:51.846658899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 654.611527ms"
Aug 13 01:07:51.846754 containerd[1478]: time="2025-08-13T01:07:51.846680559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Aug 13 01:07:51.848127 containerd[1478]: time="2025-08-13T01:07:51.848103380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Aug 13 01:07:51.849911 containerd[1478]: time="2025-08-13T01:07:51.849865501Z" level=info msg="CreateContainer within sandbox \"3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 13 01:07:51.862843 containerd[1478]: time="2025-08-13T01:07:51.862816277Z" level=info msg="CreateContainer within sandbox \"3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65f8ea2d5df535e02ce7572441765314fd46a74e5a842aae0e7023b3fc500eaa\""
Aug 13 01:07:51.863390 containerd[1478]: time="2025-08-13T01:07:51.863368577Z" level=info msg="StartContainer for \"65f8ea2d5df535e02ce7572441765314fd46a74e5a842aae0e7023b3fc500eaa\""
Aug 13 01:07:51.891040 systemd[1]: Started cri-containerd-65f8ea2d5df535e02ce7572441765314fd46a74e5a842aae0e7023b3fc500eaa.scope - libcontainer container 65f8ea2d5df535e02ce7572441765314fd46a74e5a842aae0e7023b3fc500eaa.
Aug 13 01:07:51.922735 containerd[1478]: time="2025-08-13T01:07:51.922699057Z" level=info msg="StartContainer for \"65f8ea2d5df535e02ce7572441765314fd46a74e5a842aae0e7023b3fc500eaa\" returns successfully"
Aug 13 01:07:52.754230 kubelet[1833]: E0813 01:07:52.754187 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:52.770830 kubelet[1833]: E0813 01:07:52.770496 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:52.911656 kubelet[1833]: I0813 01:07:52.911614 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:52.911821 kubelet[1833]: I0813 01:07:52.911728 1833 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:07:52.914776 containerd[1478]: time="2025-08-13T01:07:52.914593673Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\""
Aug 13 01:07:52.915349 containerd[1478]: time="2025-08-13T01:07:52.914957893Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully"
Aug 13 01:07:52.915349 containerd[1478]: time="2025-08-13T01:07:52.914971683Z" level=info msg="StopPodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully"
Aug 13 01:07:52.915349 containerd[1478]: time="2025-08-13T01:07:52.915309373Z" level=info msg="RemovePodSandbox for \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\""
Aug 13 01:07:52.915349 containerd[1478]: time="2025-08-13T01:07:52.915327533Z" level=info msg="Forcibly stopping sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\""
Aug 13 01:07:52.915448 containerd[1478]: time="2025-08-13T01:07:52.915386833Z" level=info msg="TearDown network for sandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" successfully"
Aug 13 01:07:52.919732 containerd[1478]: time="2025-08-13T01:07:52.919605515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.919732 containerd[1478]: time="2025-08-13T01:07:52.919648245Z" level=info msg="RemovePodSandbox \"d3c75924d1966baa9202ea754a07dfddfffeace3eee39f5fbbe47efb6a79e940\" returns successfully"
Aug 13 01:07:52.920164 containerd[1478]: time="2025-08-13T01:07:52.920028565Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\""
Aug 13 01:07:52.920164 containerd[1478]: time="2025-08-13T01:07:52.920111185Z" level=info msg="TearDown network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" successfully"
Aug 13 01:07:52.920164 containerd[1478]: time="2025-08-13T01:07:52.920121485Z" level=info msg="StopPodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" returns successfully"
Aug 13 01:07:52.920680 containerd[1478]: time="2025-08-13T01:07:52.920663546Z" level=info msg="RemovePodSandbox for \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\""
Aug 13 01:07:52.920769 containerd[1478]: time="2025-08-13T01:07:52.920754306Z" level=info msg="Forcibly stopping sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\""
Aug 13 01:07:52.920920 containerd[1478]: time="2025-08-13T01:07:52.920864846Z" level=info msg="TearDown network for sandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" successfully"
Aug 13 01:07:52.923761 containerd[1478]: time="2025-08-13T01:07:52.923661787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.923761 containerd[1478]: time="2025-08-13T01:07:52.923687997Z" level=info msg="RemovePodSandbox \"1fceb84579fe53c28b473a56ebdf6d4e41a8766a77991d02900d4023b0b0449c\" returns successfully"
Aug 13 01:07:52.923967 containerd[1478]: time="2025-08-13T01:07:52.923951397Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\""
Aug 13 01:07:52.924076 containerd[1478]: time="2025-08-13T01:07:52.924061637Z" level=info msg="TearDown network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" successfully"
Aug 13 01:07:52.924473 containerd[1478]: time="2025-08-13T01:07:52.924448388Z" level=info msg="StopPodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" returns successfully"
Aug 13 01:07:52.924839 containerd[1478]: time="2025-08-13T01:07:52.924780548Z" level=info msg="RemovePodSandbox for \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\""
Aug 13 01:07:52.924839 containerd[1478]: time="2025-08-13T01:07:52.924803208Z" level=info msg="Forcibly stopping sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\""
Aug 13 01:07:52.924921 containerd[1478]: time="2025-08-13T01:07:52.924866668Z" level=info msg="TearDown network for sandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" successfully"
Aug 13 01:07:52.926814 containerd[1478]: time="2025-08-13T01:07:52.926770149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.926814 containerd[1478]: time="2025-08-13T01:07:52.926810939Z" level=info msg="RemovePodSandbox \"e8d9b77c8bc90c32906e386fe2fcc0ae070c4231c01693a8260c554b24a7ccbf\" returns successfully"
Aug 13 01:07:52.927149 containerd[1478]: time="2025-08-13T01:07:52.927119279Z" level=info msg="StopPodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\""
Aug 13 01:07:52.927229 containerd[1478]: time="2025-08-13T01:07:52.927193909Z" level=info msg="TearDown network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" successfully"
Aug 13 01:07:52.927229 containerd[1478]: time="2025-08-13T01:07:52.927210059Z" level=info msg="StopPodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" returns successfully"
Aug 13 01:07:52.927580 containerd[1478]: time="2025-08-13T01:07:52.927556619Z" level=info msg="RemovePodSandbox for \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\""
Aug 13 01:07:52.927616 containerd[1478]: time="2025-08-13T01:07:52.927580619Z" level=info msg="Forcibly stopping sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\""
Aug 13 01:07:52.927721 containerd[1478]: time="2025-08-13T01:07:52.927641269Z" level=info msg="TearDown network for sandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" successfully"
Aug 13 01:07:52.930162 containerd[1478]: time="2025-08-13T01:07:52.930133350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.930207 containerd[1478]: time="2025-08-13T01:07:52.930166510Z" level=info msg="RemovePodSandbox \"d7fb342a884284c69bc5f2d4cb88342be81d91030b7c126854e24d98c9e6b1a2\" returns successfully"
Aug 13 01:07:52.930619 containerd[1478]: time="2025-08-13T01:07:52.930462941Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\""
Aug 13 01:07:52.930619 containerd[1478]: time="2025-08-13T01:07:52.930593971Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully"
Aug 13 01:07:52.930619 containerd[1478]: time="2025-08-13T01:07:52.930603511Z" level=info msg="StopPodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully"
Aug 13 01:07:52.930913 containerd[1478]: time="2025-08-13T01:07:52.930866431Z" level=info msg="RemovePodSandbox for \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\""
Aug 13 01:07:52.930973 containerd[1478]: time="2025-08-13T01:07:52.930886061Z" level=info msg="Forcibly stopping sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\""
Aug 13 01:07:52.931179 containerd[1478]: time="2025-08-13T01:07:52.931022081Z" level=info msg="TearDown network for sandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" successfully"
Aug 13 01:07:52.933363 containerd[1478]: time="2025-08-13T01:07:52.933327382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.933363 containerd[1478]: time="2025-08-13T01:07:52.933356532Z" level=info msg="RemovePodSandbox \"b7a5fbdc938dc4951fc94e9367a4e7882391ee872e05739a80a5a7f7921d6016\" returns successfully"
Aug 13 01:07:52.933605 containerd[1478]: time="2025-08-13T01:07:52.933573652Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\""
Aug 13 01:07:52.933662 containerd[1478]: time="2025-08-13T01:07:52.933643692Z" level=info msg="TearDown network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" successfully"
Aug 13 01:07:52.933662 containerd[1478]: time="2025-08-13T01:07:52.933659812Z" level=info msg="StopPodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" returns successfully"
Aug 13 01:07:52.934038 containerd[1478]: time="2025-08-13T01:07:52.934017732Z" level=info msg="RemovePodSandbox for \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\""
Aug 13 01:07:52.934358 containerd[1478]: time="2025-08-13T01:07:52.934240422Z" level=info msg="Forcibly stopping sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\""
Aug 13 01:07:52.934358 containerd[1478]: time="2025-08-13T01:07:52.934307002Z" level=info msg="TearDown network for sandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" successfully"
Aug 13 01:07:52.936453 containerd[1478]: time="2025-08-13T01:07:52.936427223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.936591 containerd[1478]: time="2025-08-13T01:07:52.936567614Z" level=info msg="RemovePodSandbox \"316eb6525ae0da1f9f4ffda3fb7a4a8e5cd40349490a359e04d938baf87de0e0\" returns successfully"
Aug 13 01:07:52.937106 containerd[1478]: time="2025-08-13T01:07:52.937086744Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\""
Aug 13 01:07:52.937172 containerd[1478]: time="2025-08-13T01:07:52.937155584Z" level=info msg="TearDown network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" successfully"
Aug 13 01:07:52.937172 containerd[1478]: time="2025-08-13T01:07:52.937169784Z" level=info msg="StopPodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" returns successfully"
Aug 13 01:07:52.937533 containerd[1478]: time="2025-08-13T01:07:52.937498114Z" level=info msg="RemovePodSandbox for \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\""
Aug 13 01:07:52.937533 containerd[1478]: time="2025-08-13T01:07:52.937517224Z" level=info msg="Forcibly stopping sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\""
Aug 13 01:07:52.937591 containerd[1478]: time="2025-08-13T01:07:52.937570284Z" level=info msg="TearDown network for sandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" successfully"
Aug 13 01:07:52.942851 containerd[1478]: time="2025-08-13T01:07:52.942756117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.942851 containerd[1478]: time="2025-08-13T01:07:52.942788277Z" level=info msg="RemovePodSandbox \"937b85a321b043412f1ae3e30df50c07aa040a827e4018e5fa2dde56e891b9eb\" returns successfully"
Aug 13 01:07:52.943142 containerd[1478]: time="2025-08-13T01:07:52.943086677Z" level=info msg="StopPodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\""
Aug 13 01:07:52.943294 containerd[1478]: time="2025-08-13T01:07:52.943163197Z" level=info msg="TearDown network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" successfully"
Aug 13 01:07:52.943294 containerd[1478]: time="2025-08-13T01:07:52.943173207Z" level=info msg="StopPodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" returns successfully"
Aug 13 01:07:52.943621 containerd[1478]: time="2025-08-13T01:07:52.943603707Z" level=info msg="RemovePodSandbox for \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\""
Aug 13 01:07:52.944753 containerd[1478]: time="2025-08-13T01:07:52.943680187Z" level=info msg="Forcibly stopping sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\""
Aug 13 01:07:52.944753 containerd[1478]: time="2025-08-13T01:07:52.943757527Z" level=info msg="TearDown network for sandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" successfully"
Aug 13 01:07:52.945808 containerd[1478]: time="2025-08-13T01:07:52.945766588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.945808 containerd[1478]: time="2025-08-13T01:07:52.945802698Z" level=info msg="RemovePodSandbox \"b6304ebd1ca06aa89cbc98d58cfe9b6924dcc66970ff86ddeb7fc5fa24380295\" returns successfully"
Aug 13 01:07:52.946106 containerd[1478]: time="2025-08-13T01:07:52.946088608Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\""
Aug 13 01:07:52.946248 containerd[1478]: time="2025-08-13T01:07:52.946233478Z" level=info msg="TearDown network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully"
Aug 13 01:07:52.946312 containerd[1478]: time="2025-08-13T01:07:52.946299968Z" level=info msg="StopPodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully"
Aug 13 01:07:52.946639 containerd[1478]: time="2025-08-13T01:07:52.946606179Z" level=info msg="RemovePodSandbox for \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\""
Aug 13 01:07:52.946639 containerd[1478]: time="2025-08-13T01:07:52.946630769Z" level=info msg="Forcibly stopping sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\""
Aug 13 01:07:52.946727 containerd[1478]: time="2025-08-13T01:07:52.946693269Z" level=info msg="TearDown network for sandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" successfully"
Aug 13 01:07:52.948727 containerd[1478]: time="2025-08-13T01:07:52.948708920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.948935 containerd[1478]: time="2025-08-13T01:07:52.948920200Z" level=info msg="RemovePodSandbox \"f46b9e128e9c250ec340f38d8b6229c7adb90b2a9eeabccac253d1d659955bc1\" returns successfully"
Aug 13 01:07:52.949300 containerd[1478]: time="2025-08-13T01:07:52.949271230Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\""
Aug 13 01:07:52.949449 containerd[1478]: time="2025-08-13T01:07:52.949347010Z" level=info msg="TearDown network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" successfully"
Aug 13 01:07:52.949449 containerd[1478]: time="2025-08-13T01:07:52.949361670Z" level=info msg="StopPodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" returns successfully"
Aug 13 01:07:52.949548 containerd[1478]: time="2025-08-13T01:07:52.949522550Z" level=info msg="RemovePodSandbox for \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\""
Aug 13 01:07:52.949548 containerd[1478]: time="2025-08-13T01:07:52.949545580Z" level=info msg="Forcibly stopping sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\""
Aug 13 01:07:52.949644 containerd[1478]: time="2025-08-13T01:07:52.949611330Z" level=info msg="TearDown network for sandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" successfully"
Aug 13 01:07:52.952159 containerd[1478]: time="2025-08-13T01:07:52.952128661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.952217 containerd[1478]: time="2025-08-13T01:07:52.952159511Z" level=info msg="RemovePodSandbox \"aead4059792cf53bd024a3db1fcbc51a83de62440dfacef038ccf950130caec5\" returns successfully"
Aug 13 01:07:52.952434 containerd[1478]: time="2025-08-13T01:07:52.952414981Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\""
Aug 13 01:07:52.952505 containerd[1478]: time="2025-08-13T01:07:52.952485462Z" level=info msg="TearDown network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" successfully"
Aug 13 01:07:52.952505 containerd[1478]: time="2025-08-13T01:07:52.952501402Z" level=info msg="StopPodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" returns successfully"
Aug 13 01:07:52.952863 containerd[1478]: time="2025-08-13T01:07:52.952840112Z" level=info msg="RemovePodSandbox for \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\""
Aug 13 01:07:52.952911 containerd[1478]: time="2025-08-13T01:07:52.952862382Z" level=info msg="Forcibly stopping sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\""
Aug 13 01:07:52.952974 containerd[1478]: time="2025-08-13T01:07:52.952942822Z" level=info msg="TearDown network for sandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" successfully"
Aug 13 01:07:52.956334 containerd[1478]: time="2025-08-13T01:07:52.956293453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.956334 containerd[1478]: time="2025-08-13T01:07:52.956332293Z" level=info msg="RemovePodSandbox \"b95aad7cf10969b750b67fc6f7a45821bbe56f16feb55c6aed9a66104cc7c86f\" returns successfully"
Aug 13 01:07:52.956757 containerd[1478]: time="2025-08-13T01:07:52.956695814Z" level=info msg="StopPodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\""
Aug 13 01:07:52.956940 containerd[1478]: time="2025-08-13T01:07:52.956923564Z" level=info msg="TearDown network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" successfully"
Aug 13 01:07:52.957020 containerd[1478]: time="2025-08-13T01:07:52.957007644Z" level=info msg="StopPodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" returns successfully"
Aug 13 01:07:52.957413 containerd[1478]: time="2025-08-13T01:07:52.957391774Z" level=info msg="RemovePodSandbox for \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\""
Aug 13 01:07:52.957446 containerd[1478]: time="2025-08-13T01:07:52.957415324Z" level=info msg="Forcibly stopping sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\""
Aug 13 01:07:52.957508 containerd[1478]: time="2025-08-13T01:07:52.957480434Z" level=info msg="TearDown network for sandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" successfully"
Aug 13 01:07:52.960200 containerd[1478]: time="2025-08-13T01:07:52.960144475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.960200 containerd[1478]: time="2025-08-13T01:07:52.960170145Z" level=info msg="RemovePodSandbox \"c684418df31be9377b0f67e84c4b755e3197d6dec3792bad7927eb66eb6b8db4\" returns successfully"
Aug 13 01:07:52.960774 containerd[1478]: time="2025-08-13T01:07:52.960671846Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\""
Aug 13 01:07:52.961480 containerd[1478]: time="2025-08-13T01:07:52.961262986Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully"
Aug 13 01:07:52.961480 containerd[1478]: time="2025-08-13T01:07:52.961317836Z" level=info msg="StopPodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully"
Aug 13 01:07:52.962419 containerd[1478]: time="2025-08-13T01:07:52.962306636Z" level=info msg="RemovePodSandbox for \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\""
Aug 13 01:07:52.962419 containerd[1478]: time="2025-08-13T01:07:52.962327656Z" level=info msg="Forcibly stopping sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\""
Aug 13 01:07:52.962419 containerd[1478]: time="2025-08-13T01:07:52.962403136Z" level=info msg="TearDown network for sandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" successfully"
Aug 13 01:07:52.964857 containerd[1478]: time="2025-08-13T01:07:52.964787208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.964998 containerd[1478]: time="2025-08-13T01:07:52.964956018Z" level=info msg="RemovePodSandbox \"a46d9c94645435c131a24de0b1506a3670602f04815d79d5d5c70f4f945caeb7\" returns successfully"
Aug 13 01:07:52.965596 containerd[1478]: time="2025-08-13T01:07:52.965570808Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\""
Aug 13 01:07:52.965687 containerd[1478]: time="2025-08-13T01:07:52.965660938Z" level=info msg="TearDown network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" successfully"
Aug 13 01:07:52.965687 containerd[1478]: time="2025-08-13T01:07:52.965677988Z" level=info msg="StopPodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" returns successfully"
Aug 13 01:07:52.965977 containerd[1478]: time="2025-08-13T01:07:52.965926268Z" level=info msg="RemovePodSandbox for \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\""
Aug 13 01:07:52.965977 containerd[1478]: time="2025-08-13T01:07:52.965950318Z" level=info msg="Forcibly stopping sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\""
Aug 13 01:07:52.966037 containerd[1478]: time="2025-08-13T01:07:52.966006068Z" level=info msg="TearDown network for sandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" successfully"
Aug 13 01:07:52.967942 containerd[1478]: time="2025-08-13T01:07:52.967844649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.968169 containerd[1478]: time="2025-08-13T01:07:52.967938919Z" level=info msg="RemovePodSandbox \"a34ea86e1381d6ea4c24d04d8059b3a892aefa3c74d8d903ebd746df2dae11a5\" returns successfully"
Aug 13 01:07:52.968420 containerd[1478]: time="2025-08-13T01:07:52.968386419Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\""
Aug 13 01:07:52.968482 containerd[1478]: time="2025-08-13T01:07:52.968460299Z" level=info msg="TearDown network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" successfully"
Aug 13 01:07:52.968482 containerd[1478]: time="2025-08-13T01:07:52.968476770Z" level=info msg="StopPodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" returns successfully"
Aug 13 01:07:52.968967 containerd[1478]: time="2025-08-13T01:07:52.968938480Z" level=info msg="RemovePodSandbox for \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\""
Aug 13 01:07:52.969060 containerd[1478]: time="2025-08-13T01:07:52.968962330Z" level=info msg="Forcibly stopping sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\""
Aug 13 01:07:52.969060 containerd[1478]: time="2025-08-13T01:07:52.969024010Z" level=info msg="TearDown network for sandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" successfully"
Aug 13 01:07:52.970801 containerd[1478]: time="2025-08-13T01:07:52.970769061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.970852 containerd[1478]: time="2025-08-13T01:07:52.970821281Z" level=info msg="RemovePodSandbox \"47a164df865b5ad49d0566ffec291337dbfecdfe4e088cf7f7d88f63b8f101e6\" returns successfully"
Aug 13 01:07:52.971249 containerd[1478]: time="2025-08-13T01:07:52.971096421Z" level=info msg="StopPodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\""
Aug 13 01:07:52.971249 containerd[1478]: time="2025-08-13T01:07:52.971191231Z" level=info msg="TearDown network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" successfully"
Aug 13 01:07:52.971249 containerd[1478]: time="2025-08-13T01:07:52.971201421Z" level=info msg="StopPodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" returns successfully"
Aug 13 01:07:52.972452 containerd[1478]: time="2025-08-13T01:07:52.971410941Z" level=info msg="RemovePodSandbox for \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\""
Aug 13 01:07:52.972452 containerd[1478]: time="2025-08-13T01:07:52.971431071Z" level=info msg="Forcibly stopping sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\""
Aug 13 01:07:52.972452 containerd[1478]: time="2025-08-13T01:07:52.971488901Z" level=info msg="TearDown network for sandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" successfully"
Aug 13 01:07:52.974368 containerd[1478]: time="2025-08-13T01:07:52.974348462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.974436 containerd[1478]: time="2025-08-13T01:07:52.974422712Z" level=info msg="RemovePodSandbox \"db893a5af0cace63866a4c918b310a28f0db54b03c66694f73416dc09d3fcf0e\" returns successfully"
Aug 13 01:07:52.974782 containerd[1478]: time="2025-08-13T01:07:52.974721613Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\""
Aug 13 01:07:52.974882 containerd[1478]: time="2025-08-13T01:07:52.974859353Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully"
Aug 13 01:07:52.974882 containerd[1478]: time="2025-08-13T01:07:52.974876983Z" level=info msg="StopPodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully"
Aug 13 01:07:52.975277 containerd[1478]: time="2025-08-13T01:07:52.975239873Z" level=info msg="RemovePodSandbox for \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\""
Aug 13 01:07:52.975277 containerd[1478]: time="2025-08-13T01:07:52.975273103Z" level=info msg="Forcibly stopping sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\""
Aug 13 01:07:52.975360 containerd[1478]: time="2025-08-13T01:07:52.975328443Z" level=info msg="TearDown network for sandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" successfully"
Aug 13 01:07:52.977239 containerd[1478]: time="2025-08-13T01:07:52.977198614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.977239 containerd[1478]: time="2025-08-13T01:07:52.977230644Z" level=info msg="RemovePodSandbox \"a2e43fa13d3a1cf8d1d13781edeeed7ebd493a30e6f11ebf1dceae6146fae548\" returns successfully"
Aug 13 01:07:52.977504 containerd[1478]: time="2025-08-13T01:07:52.977478804Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\""
Aug 13 01:07:52.977741 containerd[1478]: time="2025-08-13T01:07:52.977725014Z" level=info msg="TearDown network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" successfully"
Aug 13 01:07:52.977790 containerd[1478]: time="2025-08-13T01:07:52.977778304Z" level=info msg="StopPodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" returns successfully"
Aug 13 01:07:52.978128 containerd[1478]: time="2025-08-13T01:07:52.978101104Z" level=info msg="RemovePodSandbox for \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\""
Aug 13 01:07:52.978128 containerd[1478]: time="2025-08-13T01:07:52.978128114Z" level=info msg="Forcibly stopping sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\""
Aug 13 01:07:52.978266 containerd[1478]: time="2025-08-13T01:07:52.978208314Z" level=info msg="TearDown network for sandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" successfully"
Aug 13 01:07:52.982374 containerd[1478]: time="2025-08-13T01:07:52.982346086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.982644 containerd[1478]: time="2025-08-13T01:07:52.982379086Z" level=info msg="RemovePodSandbox \"511299aca7ebd7d4576b6302c8a682973bb611ee9c6b6339c6054603075811c0\" returns successfully"
Aug 13 01:07:52.982887 containerd[1478]: time="2025-08-13T01:07:52.982829637Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\""
Aug 13 01:07:52.982947 containerd[1478]: time="2025-08-13T01:07:52.982926057Z" level=info msg="TearDown network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" successfully"
Aug 13 01:07:52.982947 containerd[1478]: time="2025-08-13T01:07:52.982937307Z" level=info msg="StopPodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" returns successfully"
Aug 13 01:07:52.983295 containerd[1478]: time="2025-08-13T01:07:52.983149697Z" level=info msg="RemovePodSandbox for \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\""
Aug 13 01:07:52.983327 containerd[1478]: time="2025-08-13T01:07:52.983299377Z" level=info msg="Forcibly stopping sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\""
Aug 13 01:07:52.983465 containerd[1478]: time="2025-08-13T01:07:52.983376187Z" level=info msg="TearDown network for sandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" successfully"
Aug 13 01:07:52.994365 containerd[1478]: time="2025-08-13T01:07:52.994328202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:52.994915 containerd[1478]: time="2025-08-13T01:07:52.994868903Z" level=info msg="RemovePodSandbox \"de661acdf965b0f92e970b3c2aec65350a4e9146765aa0acaeb7c9515c477ee6\" returns successfully"
Aug 13 01:07:52.995175 containerd[1478]: time="2025-08-13T01:07:52.995128273Z" level=info msg="StopPodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\""
Aug 13 01:07:52.995253 containerd[1478]: time="2025-08-13T01:07:52.995204573Z" level=info msg="TearDown network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" successfully"
Aug 13 01:07:52.995253 containerd[1478]: time="2025-08-13T01:07:52.995228363Z" level=info msg="StopPodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" returns successfully"
Aug 13 01:07:52.996616 containerd[1478]: time="2025-08-13T01:07:52.996407613Z" level=info msg="RemovePodSandbox for \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\""
Aug 13 01:07:52.997892 containerd[1478]: time="2025-08-13T01:07:52.996681464Z" level=info msg="Forcibly stopping sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\""
Aug 13 01:07:52.997892 containerd[1478]: time="2025-08-13T01:07:52.996753474Z" level=info msg="TearDown network for sandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" successfully"
Aug 13 01:07:53.000563 containerd[1478]: time="2025-08-13T01:07:53.000532846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:53.000563 containerd[1478]: time="2025-08-13T01:07:53.000564396Z" level=info msg="RemovePodSandbox \"54583cda861fd853e4d4b9a5a42ad32ba946ebd2feee20fc07f0eb3fb1ae2fcd\" returns successfully"
Aug 13 01:07:53.000885 containerd[1478]: time="2025-08-13T01:07:53.000857656Z" level=info msg="StopPodSandbox for \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\""
Aug 13 01:07:53.001505 containerd[1478]: time="2025-08-13T01:07:53.001471256Z" level=info msg="TearDown network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\" successfully"
Aug 13 01:07:53.001505 containerd[1478]: time="2025-08-13T01:07:53.001492076Z" level=info msg="StopPodSandbox for \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\" returns successfully"
Aug 13 01:07:53.001791 containerd[1478]: time="2025-08-13T01:07:53.001768716Z" level=info msg="RemovePodSandbox for \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\""
Aug 13 01:07:53.001838 containerd[1478]: time="2025-08-13T01:07:53.001791356Z" level=info msg="Forcibly stopping sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\""
Aug 13 01:07:53.001882 containerd[1478]: time="2025-08-13T01:07:53.001851266Z" level=info msg="TearDown network for sandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\" successfully"
Aug 13 01:07:53.004878 containerd[1478]: time="2025-08-13T01:07:53.004815488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:07:53.005121 containerd[1478]: time="2025-08-13T01:07:53.005058488Z" level=info msg="RemovePodSandbox \"097abe4474fecd55fada477e4f336fb6253ffdba4d76274447e2739b0e2e79e6\" returns successfully"
Aug 13 01:07:53.005839 kubelet[1833]: I0813 01:07:53.005659 1833 image_gc_manager.go:447] "Attempting to delete unused images"
Aug 13 01:07:53.030936 kubelet[1833]: I0813 01:07:53.030870 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:53.031081 kubelet[1833]: I0813 01:07:53.031065 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-67866967cc-xcncr","calico-apiserver/calico-apiserver-67866967cc-2lw7j","calico-system/goldmane-768f4c5c69-x4s2d","calico-system/whisker-65ffbb4b4d-js9cm","tigera-operator/tigera-operator-747864d56d-wmzmk","default/nginx-deployment-7fcdb87857-rf29h","calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"]
Aug 13 01:07:53.236944 containerd[1478]: time="2025-08-13T01:07:53.236865704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:53.238847 containerd[1478]: time="2025-08-13T01:07:53.238780475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Aug 13 01:07:53.239380 containerd[1478]: time="2025-08-13T01:07:53.239235465Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:53.243926 containerd[1478]: time="2025-08-13T01:07:53.243280617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:07:53.247200 containerd[1478]: time="2025-08-13T01:07:53.247176819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.399042999s"
Aug 13 01:07:53.247283 containerd[1478]: time="2025-08-13T01:07:53.247268609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Aug 13 01:07:53.248197 containerd[1478]: time="2025-08-13T01:07:53.248178419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 13 01:07:53.252965 containerd[1478]: time="2025-08-13T01:07:53.252926282Z" level=info msg="CreateContainer within sandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 13 01:07:53.263465 containerd[1478]: time="2025-08-13T01:07:53.263381997Z" level=info msg="CreateContainer within sandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\""
Aug 13 01:07:53.263736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963120796.mount: Deactivated successfully.
Aug 13 01:07:53.264160 containerd[1478]: time="2025-08-13T01:07:53.264130197Z" level=info msg="StartContainer for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\""
Aug 13 01:07:53.299054 systemd[1]: Started cri-containerd-5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9.scope - libcontainer container 5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9.
Aug 13 01:07:53.339090 containerd[1478]: time="2025-08-13T01:07:53.339041855Z" level=info msg="StartContainer for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" returns successfully"
Aug 13 01:07:53.771712 kubelet[1833]: E0813 01:07:53.771680 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:07:54.066192 containerd[1478]: time="2025-08-13T01:07:54.065719147Z" level=info msg="StopContainer for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" with timeout 2 (s)"
Aug 13 01:07:54.066783 containerd[1478]: time="2025-08-13T01:07:54.066227177Z" level=info msg="Stop container \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" with signal terminated"
Aug 13 01:07:54.082974 systemd[1]: cri-containerd-5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9.scope: Deactivated successfully.
Aug 13 01:07:54.094328 kubelet[1833]: I0813 01:07:54.094264 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr" podStartSLOduration=95.550678526 podStartE2EDuration="1m40.094250196s" podCreationTimestamp="2025-08-13 01:06:14 +0000 UTC" firstStartedPulling="2025-08-13 01:07:48.704488279 +0000 UTC m=+16.603607027" lastFinishedPulling="2025-08-13 01:07:53.248059949 +0000 UTC m=+21.147178697" observedRunningTime="2025-08-13 01:07:54.092736885 +0000 UTC m=+21.991855633" watchObservedRunningTime="2025-08-13 01:07:54.094250196 +0000 UTC m=+21.993368944"
Aug 13 01:07:54.094468 kubelet[1833]: I0813 01:07:54.094366 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-rf29h" podStartSLOduration=4.543363781 podStartE2EDuration="8.094361036s" podCreationTimestamp="2025-08-13 01:07:46 +0000 UTC" firstStartedPulling="2025-08-13 01:07:47.640294367 +0000 UTC m=+15.539413115" lastFinishedPulling="2025-08-13 01:07:51.191291622 +0000 UTC m=+19.090410370" observedRunningTime="2025-08-13 01:07:52.070093011 +0000 UTC m=+19.969211769" watchObservedRunningTime="2025-08-13 01:07:54.094361036 +0000 UTC m=+21.993479794"
Aug 13 01:07:54.129343 containerd[1478]: time="2025-08-13T01:07:54.129182467Z" level=info msg="shim disconnected" id=5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9 namespace=k8s.io
Aug 13 01:07:54.129779 containerd[1478]: time="2025-08-13T01:07:54.129761268Z" level=warning msg="cleaning up after shim disconnected" id=5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9 namespace=k8s.io
Aug 13 01:07:54.129943 containerd[1478]: time="2025-08-13T01:07:54.129924758Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:07:54.150166 containerd[1478]: time="2025-08-13T01:07:54.150129965Z" level=info msg="StopContainer for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" returns successfully"
Aug 13 01:07:54.150829 containerd[1478]: time="2025-08-13T01:07:54.150803185Z" level=info msg="StopPodSandbox for \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\""
Aug 13 01:07:54.150878 containerd[1478]: time="2025-08-13T01:07:54.150833375Z" level=info msg="Container to stop \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:07:54.167410 systemd[1]: cri-containerd-d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02.scope: Deactivated successfully.
Aug 13 01:07:54.209346 containerd[1478]: time="2025-08-13T01:07:54.209292794Z" level=info msg="shim disconnected" id=d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02 namespace=k8s.io
Aug 13 01:07:54.210335 containerd[1478]: time="2025-08-13T01:07:54.210196005Z" level=warning msg="cleaning up after shim disconnected" id=d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02 namespace=k8s.io
Aug 13 01:07:54.210335 containerd[1478]: time="2025-08-13T01:07:54.210212605Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:07:54.264689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9-rootfs.mount: Deactivated successfully.
Aug 13 01:07:54.269703 kubelet[1833]: E0813 01:07:54.267759 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\": failed to extract layer sha256:a73db091f6b7038cfaf80deb11784c02c8c426bcd3bb4751a1a1f2c79dd01fc2: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2375195011: write /var/lib/containerd/tmpmounts/containerd-mount2375195011/goldmane: no space left on device: unknown" image="ghcr.io/flatcar/calico/goldmane:v3.30.2"
Aug 13 01:07:54.269703 kubelet[1833]: E0813 01:07:54.267803 1833 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\": failed to extract layer sha256:a73db091f6b7038cfaf80deb11784c02c8c426bcd3bb4751a1a1f2c79dd01fc2: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2375195011: write /var/lib/containerd/tmpmounts/containerd-mount2375195011/goldmane: no space left on device: unknown" image="ghcr.io/flatcar/calico/goldmane:v3.30.2"
Aug 13 01:07:54.269845 containerd[1478]: time="2025-08-13T01:07:54.267560025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\": failed to extract layer sha256:a73db091f6b7038cfaf80deb11784c02c8c426bcd3bb4751a1a1f2c79dd01fc2: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2375195011: write /var/lib/containerd/tmpmounts/containerd-mount2375195011/goldmane: no space left on device: unknown"
Aug 13 01:07:54.269845 containerd[1478]: time="2025-08-13T01:07:54.267629035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Aug 13 01:07:54.269845 containerd[1478]: time="2025-08-13T01:07:54.268420425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Aug 13 01:07:54.264818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02-rootfs.mount: Deactivated successfully.
Aug 13 01:07:54.264915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02-shm.mount: Deactivated successfully.
Aug 13 01:07:54.270074 kubelet[1833]: E0813 01:07:54.268035 1833 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ctnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-768f4c5c69-x4s2d_calico-system(c98841f2-352f-43ac-b754-01bf12142833): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\": failed to extract layer sha256:a73db091f6b7038cfaf80deb11784c02c8c426bcd3bb4751a1a1f2c79dd01fc2: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2375195011: write /var/lib/containerd/tmpmounts/containerd-mount2375195011/goldmane: no space left on device: unknown" logger="UnhandledError"
Aug 13 01:07:54.272015 kubelet[1833]: E0813 01:07:54.270678 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.2\\\": failed to extract layer sha256:a73db091f6b7038cfaf80deb11784c02c8c426bcd3bb4751a1a1f2c79dd01fc2: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2375195011: write /var/lib/containerd/tmpmounts/containerd-mount2375195011/goldmane: no space left on device: unknown\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" podUID="c98841f2-352f-43ac-b754-01bf12142833"
Aug 13 01:07:54.271276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375195011.mount: Deactivated successfully.
Aug 13 01:07:54.312298 systemd-networkd[1393]: caliaef0fc32422: Link DOWN
Aug 13 01:07:54.312614 systemd-networkd[1393]: caliaef0fc32422: Lost carrier
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.310 [INFO][4085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.311 [INFO][4085] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" iface="eth0" netns="/var/run/netns/cni-25bd6c7e-5857-3895-4e01-0c3f7b7f5067"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.311 [INFO][4085] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" iface="eth0" netns="/var/run/netns/cni-25bd6c7e-5857-3895-4e01-0c3f7b7f5067"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.319 [INFO][4085] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" after=8.060302ms iface="eth0" netns="/var/run/netns/cni-25bd6c7e-5857-3895-4e01-0c3f7b7f5067"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.319 [INFO][4085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.319 [INFO][4085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.340 [INFO][4093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.340 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.341 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.397 [INFO][4093] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.397 [INFO][4093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0"
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.398 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 01:07:54.403026 containerd[1478]: 2025-08-13 01:07:54.401 [INFO][4085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02"
Aug 13 01:07:54.405047 containerd[1478]: time="2025-08-13T01:07:54.405011850Z" level=info msg="TearDown network for sandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" successfully"
Aug 13 01:07:54.405047 containerd[1478]: time="2025-08-13T01:07:54.405043880Z" level=info msg="StopPodSandbox for \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" returns successfully"
Aug 13 01:07:54.405468 systemd[1]: run-netns-cni\x2d25bd6c7e\x2d5857\x2d3895\x2d4e01\x2d0c3f7b7f5067.mount: Deactivated successfully.
Aug 13 01:07:54.411163 kubelet[1833]: I0813 01:07:54.411107 1833 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-67866967cc-xcncr"
Aug 13 01:07:54.411163 kubelet[1833]: I0813 01:07:54.411134 1833 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-67866967cc-xcncr"]
Aug 13 01:07:54.422963 containerd[1478]: time="2025-08-13T01:07:54.422937796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.2\": write /var/lib/containerd/io.containerd.content.v1.content/ingest/0d58aac648b7d05349727d0a1677cdec1ab3180912691c7c62d9d7e8fc2d59ae/ref: no space left on device"
Aug 13 01:07:54.423625 containerd[1478]: time="2025-08-13T01:07:54.423240406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=73"
Aug 13 01:07:54.423671 kubelet[1833]: E0813 01:07:54.423464 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.2\": write /var/lib/containerd/io.containerd.content.v1.content/ingest/0d58aac648b7d05349727d0a1677cdec1ab3180912691c7c62d9d7e8fc2d59ae/ref: no space left on device" image="ghcr.io/flatcar/calico/whisker:v3.30.2"
Aug 13 01:07:54.423671 kubelet[1833]: E0813 01:07:54.423522 1833 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.2\": write /var/lib/containerd/io.containerd.content.v1.content/ingest/0d58aac648b7d05349727d0a1677cdec1ab3180912691c7c62d9d7e8fc2d59ae/ref: no space left on device" image="ghcr.io/flatcar/calico/whisker:v3.30.2"
Aug 13 01:07:54.424055 kubelet[1833]: E0813 01:07:54.423731 1833 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5cbf12d2e6844b50aa1c29ba0b64424f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8mxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.2\": write /var/lib/containerd/io.containerd.content.v1.content/ingest/0d58aac648b7d05349727d0a1677cdec1ab3180912691c7c62d9d7e8fc2d59ae/ref: no space left on device" logger="UnhandledError"
Aug 13 01:07:54.424191 containerd[1478]: time="2025-08-13T01:07:54.423885357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Aug 13 01:07:54.451377 containerd[1478]: time="2025-08-13T01:07:54.451313675Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-zkd5h_8074bc93-b91c-448d-80a1-893c9f8548f6/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-zkd5h_8074bc93-b91c-448d-80a1-893c9f8548f6/calico-node/0.log: no space left on device"
Aug 13 01:07:54.451444 containerd[1478]: time="2025-08-13T01:07:54.451393725Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-zkd5h_8074bc93-b91c-448d-80a1-893c9f8548f6/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-zkd5h_8074bc93-b91c-448d-80a1-893c9f8548f6/calico-node/0.log: no space left on device"
Aug 13 01:07:54.470595 containerd[1478]: time="2025-08-13T01:07:54.470549342Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-zkd5h_8074bc93-b91c-448d-80a1-893c9f8548f6/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-zkd5h_8074bc93-b91c-448d-80a1-893c9f8548f6/calico-node/0.log: no space left on device"
Aug 13 01:07:54.524302 kubelet[1833]: I0813 01:07:54.523945 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkrpd\" (UniqueName: \"kubernetes.io/projected/5bf0817c-2d50-4b1c-b228-be18c278aa6e-kube-api-access-zkrpd\") pod \"5bf0817c-2d50-4b1c-b228-be18c278aa6e\" (UID: \"5bf0817c-2d50-4b1c-b228-be18c278aa6e\") "
Aug 13 01:07:54.524302 kubelet[1833]: I0813 01:07:54.523990 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bf0817c-2d50-4b1c-b228-be18c278aa6e-calico-apiserver-certs\") pod \"5bf0817c-2d50-4b1c-b228-be18c278aa6e\" (UID: \"5bf0817c-2d50-4b1c-b228-be18c278aa6e\") "
Aug 13 01:07:54.526891 kubelet[1833]: I0813 01:07:54.526858 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf0817c-2d50-4b1c-b228-be18c278aa6e-kube-api-access-zkrpd" (OuterVolumeSpecName: "kube-api-access-zkrpd") pod "5bf0817c-2d50-4b1c-b228-be18c278aa6e" (UID: "5bf0817c-2d50-4b1c-b228-be18c278aa6e"). InnerVolumeSpecName "kube-api-access-zkrpd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 01:07:54.527588 kubelet[1833]: I0813 01:07:54.527559 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf0817c-2d50-4b1c-b228-be18c278aa6e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "5bf0817c-2d50-4b1c-b228-be18c278aa6e" (UID: "5bf0817c-2d50-4b1c-b228-be18c278aa6e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 01:07:54.530241 systemd[1]: var-lib-kubelet-pods-5bf0817c\x2d2d50\x2d4b1c\x2db228\x2dbe18c278aa6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzkrpd.mount: Deactivated successfully.
Aug 13 01:07:54.533178 systemd[1]: var-lib-kubelet-pods-5bf0817c\x2d2d50\x2d4b1c\x2db228\x2dbe18c278aa6e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Aug 13 01:07:54.575914 containerd[1478]: time="2025-08-13T01:07:54.575857918Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:54.576339 containerd[1478]: time="2025-08-13T01:07:54.576302238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 01:07:54.578095 containerd[1478]: time="2025-08-13T01:07:54.578075359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 154.149772ms" Aug 13 01:07:54.578153 containerd[1478]: time="2025-08-13T01:07:54.578100059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 01:07:54.579249 containerd[1478]: time="2025-08-13T01:07:54.579232188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:07:54.581390 containerd[1478]: time="2025-08-13T01:07:54.581359339Z" level=info msg="CreateContainer within sandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 01:07:54.594596 containerd[1478]: time="2025-08-13T01:07:54.594574864Z" level=info msg="CreateContainer within sandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\"" Aug 13 01:07:54.596000 containerd[1478]: time="2025-08-13T01:07:54.594992714Z" level=info msg="StartContainer 
for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\"" Aug 13 01:07:54.624246 kubelet[1833]: I0813 01:07:54.624216 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkrpd\" (UniqueName: \"kubernetes.io/projected/5bf0817c-2d50-4b1c-b228-be18c278aa6e-kube-api-access-zkrpd\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:54.624401 kubelet[1833]: I0813 01:07:54.624388 1833 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bf0817c-2d50-4b1c-b228-be18c278aa6e-calico-apiserver-certs\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:54.625182 systemd[1]: Started cri-containerd-e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34.scope - libcontainer container e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34. Aug 13 01:07:54.664310 containerd[1478]: time="2025-08-13T01:07:54.664215337Z" level=info msg="StartContainer for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" returns successfully" Aug 13 01:07:54.772458 kubelet[1833]: E0813 01:07:54.772405 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:54.862665 systemd[1]: Removed slice kubepods-besteffort-pod5bf0817c_2d50_4b1c_b228_be18c278aa6e.slice - libcontainer container kubepods-besteffort-pod5bf0817c_2d50_4b1c_b228_be18c278aa6e.slice. 
Aug 13 01:07:55.071991 kubelet[1833]: I0813 01:07:55.071882 1833 scope.go:117] "RemoveContainer" containerID="5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9" Aug 13 01:07:55.074224 containerd[1478]: time="2025-08-13T01:07:55.074041316Z" level=info msg="RemoveContainer for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\"" Aug 13 01:07:55.076259 kubelet[1833]: E0813 01:07:55.075125 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.2\\\": failed to extract layer sha256:a73db091f6b7038cfaf80deb11784c02c8c426bcd3bb4751a1a1f2c79dd01fc2: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2375195011: write /var/lib/containerd/tmpmounts/containerd-mount2375195011/goldmane: no space left on device: unknown\"" pod="calico-system/goldmane-768f4c5c69-x4s2d" podUID="c98841f2-352f-43ac-b754-01bf12142833" Aug 13 01:07:55.081563 containerd[1478]: time="2025-08-13T01:07:55.081523398Z" level=info msg="RemoveContainer for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" returns successfully" Aug 13 01:07:55.082819 kubelet[1833]: I0813 01:07:55.082015 1833 scope.go:117] "RemoveContainer" containerID="5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9" Aug 13 01:07:55.083078 containerd[1478]: time="2025-08-13T01:07:55.083026638Z" level=error msg="ContainerStatus for \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\": not found" Aug 13 01:07:55.083576 kubelet[1833]: E0813 01:07:55.083545 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\": not found" containerID="5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9" Aug 13 01:07:55.083674 kubelet[1833]: I0813 01:07:55.083576 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9"} err="failed to get container status \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b8c7ccc0fb2d643202d176a173cd84e923c4ba60b8c9a6695f09f603600bdd9\": not found" Aug 13 01:07:55.104310 kubelet[1833]: I0813 01:07:55.104213 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" podStartSLOduration=92.632978007 podStartE2EDuration="1m38.104201346s" podCreationTimestamp="2025-08-13 01:06:17 +0000 UTC" firstStartedPulling="2025-08-13 01:07:49.10750781 +0000 UTC m=+17.006626558" lastFinishedPulling="2025-08-13 01:07:54.578731149 +0000 UTC m=+22.477849897" observedRunningTime="2025-08-13 01:07:55.088406021 +0000 UTC m=+22.987524769" watchObservedRunningTime="2025-08-13 01:07:55.104201346 +0000 UTC m=+23.003320094" Aug 13 01:07:55.354913 containerd[1478]: time="2025-08-13T01:07:55.354822333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:55.355727 containerd[1478]: time="2025-08-13T01:07:55.355543693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:07:55.357355 containerd[1478]: time="2025-08-13T01:07:55.356256474Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:55.358039 containerd[1478]: time="2025-08-13T01:07:55.357989103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:07:55.358721 containerd[1478]: time="2025-08-13T01:07:55.358688954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 779.433976ms" Aug 13 01:07:55.358769 containerd[1478]: time="2025-08-13T01:07:55.358722254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:07:55.361130 containerd[1478]: time="2025-08-13T01:07:55.360886235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 01:07:55.363105 containerd[1478]: time="2025-08-13T01:07:55.363070205Z" level=info msg="CreateContainer within sandbox \"3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:07:55.378815 containerd[1478]: time="2025-08-13T01:07:55.378586031Z" level=info msg="CreateContainer within sandbox \"3aa7d1877fba8fa8db89b60eaac1a2aecdf17db4cd9b5353babb61d506cdec68\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9fa74f800fb6ae6e3436c8af760617006c0850deee0745ec8fac96ec4d5c133d\"" Aug 13 01:07:55.379335 containerd[1478]: time="2025-08-13T01:07:55.379314401Z" level=info 
msg="StartContainer for \"9fa74f800fb6ae6e3436c8af760617006c0850deee0745ec8fac96ec4d5c133d\"" Aug 13 01:07:55.411272 kubelet[1833]: I0813 01:07:55.411247 1833 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-67866967cc-xcncr"] Aug 13 01:07:55.417205 systemd[1]: Started cri-containerd-9fa74f800fb6ae6e3436c8af760617006c0850deee0745ec8fac96ec4d5c133d.scope - libcontainer container 9fa74f800fb6ae6e3436c8af760617006c0850deee0745ec8fac96ec4d5c133d. Aug 13 01:07:55.454143 containerd[1478]: time="2025-08-13T01:07:55.453974396Z" level=info msg="StartContainer for \"9fa74f800fb6ae6e3436c8af760617006c0850deee0745ec8fac96ec4d5c133d\" returns successfully" Aug 13 01:07:55.462519 kubelet[1833]: I0813 01:07:55.462482 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:55.462600 kubelet[1833]: I0813 01:07:55.462528 1833 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:07:55.465529 containerd[1478]: time="2025-08-13T01:07:55.465503661Z" level=info msg="StopPodSandbox for \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\"" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.520 [INFO][4207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.523 [INFO][4207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" iface="eth0" netns="" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.523 [INFO][4207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.523 [INFO][4207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.556 [INFO][4214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.557 [INFO][4214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.557 [INFO][4214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.564 [WARNING][4214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.564 [INFO][4214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.566 [INFO][4214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:55.572377 containerd[1478]: 2025-08-13 01:07:55.568 [INFO][4207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.572377 containerd[1478]: time="2025-08-13T01:07:55.572138567Z" level=info msg="TearDown network for sandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" successfully" Aug 13 01:07:55.572377 containerd[1478]: time="2025-08-13T01:07:55.572162217Z" level=info msg="StopPodSandbox for \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" returns successfully" Aug 13 01:07:55.572817 containerd[1478]: time="2025-08-13T01:07:55.572803688Z" level=info msg="RemovePodSandbox for \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\"" Aug 13 01:07:55.574296 containerd[1478]: time="2025-08-13T01:07:55.572825518Z" level=info msg="Forcibly stopping sandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\"" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.624 [INFO][4229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.657107 
containerd[1478]: 2025-08-13 01:07:55.624 [INFO][4229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" iface="eth0" netns="" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.624 [INFO][4229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.624 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.644 [INFO][4237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.644 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.644 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.651 [WARNING][4237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.651 [INFO][4237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" HandleID="k8s-pod-network.d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--xcncr-eth0" Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.652 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:55.657107 containerd[1478]: 2025-08-13 01:07:55.655 [INFO][4229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02" Aug 13 01:07:55.658033 containerd[1478]: time="2025-08-13T01:07:55.657966137Z" level=info msg="TearDown network for sandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" successfully" Aug 13 01:07:55.661594 containerd[1478]: time="2025-08-13T01:07:55.661427349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 01:07:55.661594 containerd[1478]: time="2025-08-13T01:07:55.661491579Z" level=info msg="RemovePodSandbox \"d49ab9f74a3dcdadcb47933bd65680ff1a2282b92d55f9f3e5491c3452f9ab02\" returns successfully" Aug 13 01:07:55.662330 kubelet[1833]: I0813 01:07:55.662304 1833 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:07:55.674205 kubelet[1833]: I0813 01:07:55.674186 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:55.674275 kubelet[1833]: I0813 01:07:55.674246 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-x4s2d","calico-system/whisker-65ffbb4b4d-js9cm","tigera-operator/tigera-operator-747864d56d-wmzmk","calico-apiserver/calico-apiserver-67866967cc-2lw7j","default/nginx-deployment-7fcdb87857-rf29h","calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"] Aug 13 01:07:55.772975 kubelet[1833]: E0813 01:07:55.772788 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:55.855189 kubelet[1833]: I0813 01:07:55.855157 1833 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:07:55.855189 kubelet[1833]: I0813 01:07:55.855199 1833 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:07:56.077814 kubelet[1833]: I0813 01:07:56.077690 1833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:07:56.079333 containerd[1478]: time="2025-08-13T01:07:56.078760814Z" level=info msg="StopPodSandbox for \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\"" Aug 13 01:07:56.083228 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881-shm.mount: Deactivated successfully. Aug 13 01:07:56.089391 systemd[1]: cri-containerd-f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881.scope: Deactivated successfully. Aug 13 01:07:56.113193 kubelet[1833]: I0813 01:07:56.112762 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bc77x" podStartSLOduration=16.271193497 podStartE2EDuration="23.112595535s" podCreationTimestamp="2025-08-13 01:07:33 +0000 UTC" firstStartedPulling="2025-08-13 01:07:48.518523406 +0000 UTC m=+16.417642154" lastFinishedPulling="2025-08-13 01:07:55.359925444 +0000 UTC m=+23.259044192" observedRunningTime="2025-08-13 01:07:56.0981768 +0000 UTC m=+23.997295548" watchObservedRunningTime="2025-08-13 01:07:56.112595535 +0000 UTC m=+24.011714283" Aug 13 01:07:56.117429 containerd[1478]: time="2025-08-13T01:07:56.117371948Z" level=info msg="shim disconnected" id=f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881 namespace=k8s.io Aug 13 01:07:56.117429 containerd[1478]: time="2025-08-13T01:07:56.117428368Z" level=warning msg="cleaning up after shim disconnected" id=f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881 namespace=k8s.io Aug 13 01:07:56.117521 containerd[1478]: time="2025-08-13T01:07:56.117436938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:07:56.239661 systemd-networkd[1393]: calic455ddf6d00: Link DOWN Aug 13 01:07:56.240244 systemd-networkd[1393]: calic455ddf6d00: Lost carrier Aug 13 01:07:56.262874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881-rootfs.mount: Deactivated successfully. 
Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.237 [INFO][4288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.237 [INFO][4288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" iface="eth0" netns="/var/run/netns/cni-a53b0134-1076-7ace-f023-0784c40312fb" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.238 [INFO][4288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" iface="eth0" netns="/var/run/netns/cni-a53b0134-1076-7ace-f023-0784c40312fb" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.247 [INFO][4288] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" after=9.984634ms iface="eth0" netns="/var/run/netns/cni-a53b0134-1076-7ace-f023-0784c40312fb" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.248 [INFO][4288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.249 [INFO][4288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.286 [INFO][4297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.286 [INFO][4297] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.286 [INFO][4297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.316 [INFO][4297] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.316 [INFO][4297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.318 [INFO][4297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:56.330968 containerd[1478]: 2025-08-13 01:07:56.323 [INFO][4288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:56.331195 systemd[1]: run-netns-cni\x2da53b0134\x2d1076\x2d7ace\x2df023\x2d0784c40312fb.mount: Deactivated successfully. 
Aug 13 01:07:56.332928 containerd[1478]: time="2025-08-13T01:07:56.332522034Z" level=info msg="TearDown network for sandbox \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\" successfully" Aug 13 01:07:56.332928 containerd[1478]: time="2025-08-13T01:07:56.332557234Z" level=info msg="StopPodSandbox for \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\" returns successfully" Aug 13 01:07:56.338683 kubelet[1833]: I0813 01:07:56.338580 1833 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-x4s2d" Aug 13 01:07:56.338683 kubelet[1833]: I0813 01:07:56.338601 1833 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-x4s2d"] Aug 13 01:07:56.375836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164116762.mount: Deactivated successfully. Aug 13 01:07:56.377805 containerd[1478]: time="2025-08-13T01:07:56.377204790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2164116762: write /var/lib/containerd/tmpmounts/containerd-mount2164116762/whisker-backend: no space left on device: unknown" Aug 13 01:07:56.377805 containerd[1478]: time="2025-08-13T01:07:56.377285830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 01:07:56.378020 kubelet[1833]: E0813 01:07:56.377418 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer 
sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2164116762: write /var/lib/containerd/tmpmounts/containerd-mount2164116762/whisker-backend: no space left on device: unknown" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.2" Aug 13 01:07:56.378020 kubelet[1833]: E0813 01:07:56.377457 1833 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2164116762: write /var/lib/containerd/tmpmounts/containerd-mount2164116762/whisker-backend: no space left on device: unknown" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.2" Aug 13 01:07:56.378020 kubelet[1833]: E0813 01:07:56.377557 1833 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8mxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65ffbb4b4d-js9cm_calico-system(61ce1787-bb8a-413c-8736-b5b6cbd4da1d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\": 
failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2164116762: write /var/lib/containerd/tmpmounts/containerd-mount2164116762/whisker-backend: no space left on device: unknown" logger="UnhandledError" Aug 13 01:07:56.378957 kubelet[1833]: E0813 01:07:56.378852 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.2\\\": write /var/lib/containerd/io.containerd.content.v1.content/ingest/0d58aac648b7d05349727d0a1677cdec1ab3180912691c7c62d9d7e8fc2d59ae/ref: no space left on device\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\\\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2164116762: write /var/lib/containerd/tmpmounts/containerd-mount2164116762/whisker-backend: no space left on device: unknown\"]" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d" Aug 13 01:07:56.435618 kubelet[1833]: I0813 01:07:56.435560 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-goldmane-ca-bundle\") pod \"c98841f2-352f-43ac-b754-01bf12142833\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " Aug 13 01:07:56.436052 kubelet[1833]: I0813 01:07:56.436014 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "c98841f2-352f-43ac-b754-01bf12142833" (UID: 
"c98841f2-352f-43ac-b754-01bf12142833"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:07:56.436150 kubelet[1833]: I0813 01:07:56.436073 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctnk\" (UniqueName: \"kubernetes.io/projected/c98841f2-352f-43ac-b754-01bf12142833-kube-api-access-5ctnk\") pod \"c98841f2-352f-43ac-b754-01bf12142833\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " Aug 13 01:07:56.437913 kubelet[1833]: I0813 01:07:56.436417 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-config\") pod \"c98841f2-352f-43ac-b754-01bf12142833\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " Aug 13 01:07:56.437913 kubelet[1833]: I0813 01:07:56.436453 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c98841f2-352f-43ac-b754-01bf12142833-goldmane-key-pair\") pod \"c98841f2-352f-43ac-b754-01bf12142833\" (UID: \"c98841f2-352f-43ac-b754-01bf12142833\") " Aug 13 01:07:56.437913 kubelet[1833]: I0813 01:07:56.436523 1833 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-goldmane-ca-bundle\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:56.439749 kubelet[1833]: I0813 01:07:56.439726 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98841f2-352f-43ac-b754-01bf12142833-kube-api-access-5ctnk" (OuterVolumeSpecName: "kube-api-access-5ctnk") pod "c98841f2-352f-43ac-b754-01bf12142833" (UID: "c98841f2-352f-43ac-b754-01bf12142833"). InnerVolumeSpecName "kube-api-access-5ctnk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:07:56.440247 kubelet[1833]: I0813 01:07:56.440193 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-config" (OuterVolumeSpecName: "config") pod "c98841f2-352f-43ac-b754-01bf12142833" (UID: "c98841f2-352f-43ac-b754-01bf12142833"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:07:56.441882 systemd[1]: var-lib-kubelet-pods-c98841f2\x2d352f\x2d43ac\x2db754\x2d01bf12142833-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5ctnk.mount: Deactivated successfully. Aug 13 01:07:56.444252 kubelet[1833]: I0813 01:07:56.443926 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c98841f2-352f-43ac-b754-01bf12142833-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "c98841f2-352f-43ac-b754-01bf12142833" (UID: "c98841f2-352f-43ac-b754-01bf12142833"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:07:56.445009 systemd[1]: var-lib-kubelet-pods-c98841f2\x2d352f\x2d43ac\x2db754\x2d01bf12142833-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:07:56.537033 kubelet[1833]: I0813 01:07:56.536988 1833 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c98841f2-352f-43ac-b754-01bf12142833-config\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:56.537033 kubelet[1833]: I0813 01:07:56.537020 1833 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c98841f2-352f-43ac-b754-01bf12142833-goldmane-key-pair\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:56.537033 kubelet[1833]: I0813 01:07:56.537032 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5ctnk\" (UniqueName: \"kubernetes.io/projected/c98841f2-352f-43ac-b754-01bf12142833-kube-api-access-5ctnk\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:56.774140 kubelet[1833]: E0813 01:07:56.774077 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:56.857701 systemd[1]: Removed slice kubepods-besteffort-podc98841f2_352f_43ac_b754_01bf12142833.slice - libcontainer container kubepods-besteffort-podc98841f2_352f_43ac_b754_01bf12142833.slice. 
Aug 13 01:07:57.083286 kubelet[1833]: E0813 01:07:57.083179 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.2\\\": write /var/lib/containerd/io.containerd.content.v1.content/ingest/0d58aac648b7d05349727d0a1677cdec1ab3180912691c7c62d9d7e8fc2d59ae/ref: no space left on device\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\\\": failed to extract layer sha256:0c4aca40533ff60094d182f7cdf301b884bdcb103a734e3c9d7ea28d1911abf5: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2164116762: write /var/lib/containerd/tmpmounts/containerd-mount2164116762/whisker-backend: no space left on device: unknown\"]" pod="calico-system/whisker-65ffbb4b4d-js9cm" podUID="61ce1787-bb8a-413c-8736-b5b6cbd4da1d" Aug 13 01:07:57.339495 kubelet[1833]: I0813 01:07:57.339348 1833 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-x4s2d"] Aug 13 01:07:57.348087 kubelet[1833]: I0813 01:07:57.348050 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:57.348087 kubelet[1833]: I0813 01:07:57.348090 1833 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:07:57.349098 containerd[1478]: time="2025-08-13T01:07:57.349072739Z" level=info msg="StopPodSandbox for \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\"" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.385 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.385 [INFO][4321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" iface="eth0" netns="" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.385 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.385 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.405 [INFO][4328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.406 [INFO][4328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.406 [INFO][4328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.417 [WARNING][4328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.417 [INFO][4328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.418 [INFO][4328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:57.422526 containerd[1478]: 2025-08-13 01:07:57.420 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.422526 containerd[1478]: time="2025-08-13T01:07:57.422528656Z" level=info msg="TearDown network for sandbox \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\" successfully" Aug 13 01:07:57.422526 containerd[1478]: time="2025-08-13T01:07:57.422549476Z" level=info msg="StopPodSandbox for \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\" returns successfully" Aug 13 01:07:57.423355 containerd[1478]: time="2025-08-13T01:07:57.423332576Z" level=info msg="RemovePodSandbox for \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\"" Aug 13 01:07:57.423406 containerd[1478]: time="2025-08-13T01:07:57.423360516Z" level=info msg="Forcibly stopping sandbox \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\"" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.467 [INFO][4342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 
01:07:57.467 [INFO][4342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" iface="eth0" netns="" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.467 [INFO][4342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.467 [INFO][4342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.488 [INFO][4350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.489 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.489 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.495 [WARNING][4350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.495 [INFO][4350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" HandleID="k8s-pod-network.f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Workload="192.168.133.100-k8s-goldmane--768f4c5c69--x4s2d-eth0" Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.496 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:57.504138 containerd[1478]: 2025-08-13 01:07:57.500 [INFO][4342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881" Aug 13 01:07:57.504573 containerd[1478]: time="2025-08-13T01:07:57.504535916Z" level=info msg="TearDown network for sandbox \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\" successfully" Aug 13 01:07:57.507506 containerd[1478]: time="2025-08-13T01:07:57.507484177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 01:07:57.507589 containerd[1478]: time="2025-08-13T01:07:57.507517877Z" level=info msg="RemovePodSandbox \"f0cbb73b62bfcd9ff6611e9545f87ee6984dbc199662212b7add39136d291881\" returns successfully" Aug 13 01:07:57.508057 kubelet[1833]: I0813 01:07:57.508005 1833 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:07:57.516385 kubelet[1833]: I0813 01:07:57.516369 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:57.516458 kubelet[1833]: I0813 01:07:57.516442 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-65ffbb4b4d-js9cm","tigera-operator/tigera-operator-747864d56d-wmzmk","calico-apiserver/calico-apiserver-67866967cc-2lw7j","default/nginx-deployment-7fcdb87857-rf29h","calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"] Aug 13 01:07:57.774559 kubelet[1833]: E0813 01:07:57.774498 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:58.081951 containerd[1478]: time="2025-08-13T01:07:58.081614107Z" level=info msg="StopPodSandbox for \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\"" Aug 13 01:07:58.085623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1-shm.mount: Deactivated successfully. Aug 13 01:07:58.090335 systemd[1]: cri-containerd-44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1.scope: Deactivated successfully. 
Aug 13 01:07:58.110793 containerd[1478]: time="2025-08-13T01:07:58.110658578Z" level=info msg="shim disconnected" id=44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1 namespace=k8s.io Aug 13 01:07:58.110793 containerd[1478]: time="2025-08-13T01:07:58.110728148Z" level=warning msg="cleaning up after shim disconnected" id=44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1 namespace=k8s.io Aug 13 01:07:58.110793 containerd[1478]: time="2025-08-13T01:07:58.110736648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:07:58.111528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1-rootfs.mount: Deactivated successfully. Aug 13 01:07:58.164968 systemd-networkd[1393]: calif57c8335e0f: Link DOWN Aug 13 01:07:58.164976 systemd-networkd[1393]: calif57c8335e0f: Lost carrier Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.163 [INFO][4395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.163 [INFO][4395] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" iface="eth0" netns="/var/run/netns/cni-a06a3538-1b1e-6f48-964e-5e852a8e9c64" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.163 [INFO][4395] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" iface="eth0" netns="/var/run/netns/cni-a06a3538-1b1e-6f48-964e-5e852a8e9c64" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.174 [INFO][4395] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" after=10.355913ms iface="eth0" netns="/var/run/netns/cni-a06a3538-1b1e-6f48-964e-5e852a8e9c64" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.174 [INFO][4395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.174 [INFO][4395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.190 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.190 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.190 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.221 [INFO][4402] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.221 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.223 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:58.227375 containerd[1478]: 2025-08-13 01:07:58.225 [INFO][4395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:58.229552 systemd[1]: run-netns-cni\x2da06a3538\x2d1b1e\x2d6f48\x2d964e\x2d5e852a8e9c64.mount: Deactivated successfully. 
Aug 13 01:07:58.230165 containerd[1478]: time="2025-08-13T01:07:58.229723112Z" level=info msg="TearDown network for sandbox \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\" successfully" Aug 13 01:07:58.230165 containerd[1478]: time="2025-08-13T01:07:58.229745912Z" level=info msg="StopPodSandbox for \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\" returns successfully" Aug 13 01:07:58.234555 kubelet[1833]: I0813 01:07:58.234536 1833 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-65ffbb4b4d-js9cm" Aug 13 01:07:58.234555 kubelet[1833]: I0813 01:07:58.234555 1833 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-65ffbb4b4d-js9cm"] Aug 13 01:07:58.345823 kubelet[1833]: I0813 01:07:58.345513 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8mxt\" (UniqueName: \"kubernetes.io/projected/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-kube-api-access-n8mxt\") pod \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\" (UID: \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\") " Aug 13 01:07:58.345823 kubelet[1833]: I0813 01:07:58.345565 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-backend-key-pair\") pod \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\" (UID: \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\") " Aug 13 01:07:58.345823 kubelet[1833]: I0813 01:07:58.345586 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-ca-bundle\") pod \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\" (UID: \"61ce1787-bb8a-413c-8736-b5b6cbd4da1d\") " Aug 13 01:07:58.346001 kubelet[1833]: I0813 01:07:58.345953 1833 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "61ce1787-bb8a-413c-8736-b5b6cbd4da1d" (UID: "61ce1787-bb8a-413c-8736-b5b6cbd4da1d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:07:58.350524 systemd[1]: var-lib-kubelet-pods-61ce1787\x2dbb8a\x2d413c\x2d8736\x2db5b6cbd4da1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8mxt.mount: Deactivated successfully. Aug 13 01:07:58.351098 kubelet[1833]: I0813 01:07:58.350597 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "61ce1787-bb8a-413c-8736-b5b6cbd4da1d" (UID: "61ce1787-bb8a-413c-8736-b5b6cbd4da1d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:07:58.351245 kubelet[1833]: I0813 01:07:58.351220 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-kube-api-access-n8mxt" (OuterVolumeSpecName: "kube-api-access-n8mxt") pod "61ce1787-bb8a-413c-8736-b5b6cbd4da1d" (UID: "61ce1787-bb8a-413c-8736-b5b6cbd4da1d"). InnerVolumeSpecName "kube-api-access-n8mxt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:07:58.353376 systemd[1]: var-lib-kubelet-pods-61ce1787\x2dbb8a\x2d413c\x2d8736\x2db5b6cbd4da1d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:07:58.446138 kubelet[1833]: I0813 01:07:58.446109 1833 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-backend-key-pair\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:58.446138 kubelet[1833]: I0813 01:07:58.446138 1833 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-whisker-ca-bundle\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:58.446280 kubelet[1833]: I0813 01:07:58.446149 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8mxt\" (UniqueName: \"kubernetes.io/projected/61ce1787-bb8a-413c-8736-b5b6cbd4da1d-kube-api-access-n8mxt\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:58.775628 kubelet[1833]: E0813 01:07:58.775586 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:58.857447 systemd[1]: Removed slice kubepods-besteffort-pod61ce1787_bb8a_413c_8736_b5b6cbd4da1d.slice - libcontainer container kubepods-besteffort-pod61ce1787_bb8a_413c_8736_b5b6cbd4da1d.slice. 
Aug 13 01:07:59.235690 kubelet[1833]: I0813 01:07:59.235625 1833 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-65ffbb4b4d-js9cm"] Aug 13 01:07:59.255717 kubelet[1833]: I0813 01:07:59.255683 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:59.255717 kubelet[1833]: I0813 01:07:59.255725 1833 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:07:59.256872 containerd[1478]: time="2025-08-13T01:07:59.256841459Z" level=info msg="StopPodSandbox for \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\"" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.311 [INFO][4427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.311 [INFO][4427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" iface="eth0" netns="" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.311 [INFO][4427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.311 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.334 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.334 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.334 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.344 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.344 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.348 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:59.353472 containerd[1478]: 2025-08-13 01:07:59.350 [INFO][4427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.354048 containerd[1478]: time="2025-08-13T01:07:59.353726165Z" level=info msg="TearDown network for sandbox \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\" successfully" Aug 13 01:07:59.354048 containerd[1478]: time="2025-08-13T01:07:59.353819325Z" level=info msg="StopPodSandbox for \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\" returns successfully" Aug 13 01:07:59.354476 containerd[1478]: time="2025-08-13T01:07:59.354433085Z" level=info msg="RemovePodSandbox for \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\"" Aug 13 01:07:59.354476 containerd[1478]: time="2025-08-13T01:07:59.354459555Z" level=info msg="Forcibly stopping sandbox \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\"" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.400 [INFO][4450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 
01:07:59.401 [INFO][4450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" iface="eth0" netns="" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.401 [INFO][4450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.401 [INFO][4450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.424 [INFO][4457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.424 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.424 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.431 [WARNING][4457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.431 [INFO][4457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" HandleID="k8s-pod-network.44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Workload="192.168.133.100-k8s-whisker--65ffbb4b4d--js9cm-eth0" Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.433 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:07:59.441360 containerd[1478]: 2025-08-13 01:07:59.435 [INFO][4450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1" Aug 13 01:07:59.441360 containerd[1478]: time="2025-08-13T01:07:59.437490207Z" level=info msg="TearDown network for sandbox \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\" successfully" Aug 13 01:07:59.443979 containerd[1478]: time="2025-08-13T01:07:59.443955550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 01:07:59.444076 containerd[1478]: time="2025-08-13T01:07:59.444046210Z" level=info msg="RemovePodSandbox \"44c4a8e6cc47c8921d984207fec79417f12004f77deedfe4fed6ee54f752fcc1\" returns successfully" Aug 13 01:07:59.444494 kubelet[1833]: I0813 01:07:59.444458 1833 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:07:59.465361 kubelet[1833]: I0813 01:07:59.465332 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:59.465964 kubelet[1833]: I0813 01:07:59.465626 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["tigera-operator/tigera-operator-747864d56d-wmzmk","calico-apiserver/calico-apiserver-67866967cc-2lw7j","default/nginx-deployment-7fcdb87857-rf29h","calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"] Aug 13 01:07:59.466647 containerd[1478]: time="2025-08-13T01:07:59.466397548Z" level=info msg="StopContainer for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" with timeout 2 (s)" Aug 13 01:07:59.466847 containerd[1478]: time="2025-08-13T01:07:59.466797118Z" level=info msg="Stop container \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" with signal terminated" Aug 13 01:07:59.640457 systemd[1]: cri-containerd-df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3.scope: Deactivated successfully. Aug 13 01:07:59.640773 systemd[1]: cri-containerd-df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3.scope: Consumed 612ms CPU time, 67.4M memory peak. Aug 13 01:07:59.662725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3-rootfs.mount: Deactivated successfully. 
Aug 13 01:07:59.665837 containerd[1478]: time="2025-08-13T01:07:59.665587864Z" level=info msg="shim disconnected" id=df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3 namespace=k8s.io Aug 13 01:07:59.665837 containerd[1478]: time="2025-08-13T01:07:59.665634994Z" level=warning msg="cleaning up after shim disconnected" id=df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3 namespace=k8s.io Aug 13 01:07:59.665837 containerd[1478]: time="2025-08-13T01:07:59.665834904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:07:59.684396 containerd[1478]: time="2025-08-13T01:07:59.684364101Z" level=info msg="StopContainer for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" returns successfully" Aug 13 01:07:59.684917 containerd[1478]: time="2025-08-13T01:07:59.684859031Z" level=info msg="StopPodSandbox for \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\"" Aug 13 01:07:59.685044 containerd[1478]: time="2025-08-13T01:07:59.684918851Z" level=info msg="Container to stop \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:07:59.687437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f-shm.mount: Deactivated successfully. Aug 13 01:07:59.693991 systemd[1]: cri-containerd-6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f.scope: Deactivated successfully. Aug 13 01:07:59.715351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f-rootfs.mount: Deactivated successfully. 
Aug 13 01:07:59.716410 containerd[1478]: time="2025-08-13T01:07:59.716308163Z" level=info msg="shim disconnected" id=6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f namespace=k8s.io Aug 13 01:07:59.716609 containerd[1478]: time="2025-08-13T01:07:59.716540803Z" level=warning msg="cleaning up after shim disconnected" id=6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f namespace=k8s.io Aug 13 01:07:59.716609 containerd[1478]: time="2025-08-13T01:07:59.716556643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:07:59.731634 containerd[1478]: time="2025-08-13T01:07:59.731590419Z" level=info msg="TearDown network for sandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" successfully" Aug 13 01:07:59.731634 containerd[1478]: time="2025-08-13T01:07:59.731615339Z" level=info msg="StopPodSandbox for \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" returns successfully" Aug 13 01:07:59.736302 kubelet[1833]: I0813 01:07:59.736261 1833 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-wmzmk" Aug 13 01:07:59.736302 kubelet[1833]: I0813 01:07:59.736285 1833 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-wmzmk"] Aug 13 01:07:59.753597 kubelet[1833]: I0813 01:07:59.753555 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gssk5\" (UniqueName: \"kubernetes.io/projected/7f26481a-205b-42bf-bb1f-48df3d99d8eb-kube-api-access-gssk5\") pod \"7f26481a-205b-42bf-bb1f-48df3d99d8eb\" (UID: \"7f26481a-205b-42bf-bb1f-48df3d99d8eb\") " Aug 13 01:07:59.753597 kubelet[1833]: I0813 01:07:59.753589 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f26481a-205b-42bf-bb1f-48df3d99d8eb-var-lib-calico\") pod 
\"7f26481a-205b-42bf-bb1f-48df3d99d8eb\" (UID: \"7f26481a-205b-42bf-bb1f-48df3d99d8eb\") " Aug 13 01:07:59.753730 kubelet[1833]: I0813 01:07:59.753646 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f26481a-205b-42bf-bb1f-48df3d99d8eb-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "7f26481a-205b-42bf-bb1f-48df3d99d8eb" (UID: "7f26481a-205b-42bf-bb1f-48df3d99d8eb"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:07:59.756594 kubelet[1833]: I0813 01:07:59.756349 1833 kubelet.go:2405] "Pod admission denied" podUID="dbae65a3-6257-4349-bc08-63a990ecc8ec" pod="tigera-operator/tigera-operator-747864d56d-cn6s8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:07:59.758141 kubelet[1833]: I0813 01:07:59.758100 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f26481a-205b-42bf-bb1f-48df3d99d8eb-kube-api-access-gssk5" (OuterVolumeSpecName: "kube-api-access-gssk5") pod "7f26481a-205b-42bf-bb1f-48df3d99d8eb" (UID: "7f26481a-205b-42bf-bb1f-48df3d99d8eb"). InnerVolumeSpecName "kube-api-access-gssk5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:07:59.759205 systemd[1]: var-lib-kubelet-pods-7f26481a\x2d205b\x2d42bf\x2dbb1f\x2d48df3d99d8eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgssk5.mount: Deactivated successfully. Aug 13 01:07:59.776129 kubelet[1833]: E0813 01:07:59.776100 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:07:59.779937 kubelet[1833]: I0813 01:07:59.779893 1833 kubelet.go:2405] "Pod admission denied" podUID="d092797f-77c7-4cf6-9e6b-425473a83e05" pod="tigera-operator/tigera-operator-747864d56d-zzx2h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:07:59.797230 kubelet[1833]: I0813 01:07:59.797202 1833 kubelet.go:2405] "Pod admission denied" podUID="fb4ee81a-f259-4fb9-a76d-7a90c4598e0e" pod="tigera-operator/tigera-operator-747864d56d-7h69n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:07:59.827047 kubelet[1833]: I0813 01:07:59.826746 1833 kubelet.go:2405] "Pod admission denied" podUID="b41e3799-18a1-481b-8efb-42f438449a40" pod="tigera-operator/tigera-operator-747864d56d-w2n8m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:07:59.846702 kubelet[1833]: I0813 01:07:59.846448 1833 kubelet.go:2405] "Pod admission denied" podUID="a5f59ef8-b7b8-4aae-ad77-7547eb6075cf" pod="tigera-operator/tigera-operator-747864d56d-dd67j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:07:59.851102 kubelet[1833]: I0813 01:07:59.851081 1833 status_manager.go:895] "Failed to get status for pod" podUID="a5f59ef8-b7b8-4aae-ad77-7547eb6075cf" pod="tigera-operator/tigera-operator-747864d56d-dd67j" err="pods \"tigera-operator-747864d56d-dd67j\" is forbidden: User \"system:node:192.168.133.100\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '192.168.133.100' and this object" Aug 13 01:07:59.854257 kubelet[1833]: I0813 01:07:59.854236 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gssk5\" (UniqueName: \"kubernetes.io/projected/7f26481a-205b-42bf-bb1f-48df3d99d8eb-kube-api-access-gssk5\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:07:59.854257 kubelet[1833]: I0813 01:07:59.854257 1833 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f26481a-205b-42bf-bb1f-48df3d99d8eb-var-lib-calico\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:08:00.086952 kubelet[1833]: I0813 01:08:00.086446 1833 scope.go:117] "RemoveContainer" 
containerID="df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3" Aug 13 01:08:00.089080 containerd[1478]: time="2025-08-13T01:08:00.088969276Z" level=info msg="RemoveContainer for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\"" Aug 13 01:08:00.091868 containerd[1478]: time="2025-08-13T01:08:00.091847907Z" level=info msg="RemoveContainer for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" returns successfully" Aug 13 01:08:00.092047 kubelet[1833]: I0813 01:08:00.091977 1833 scope.go:117] "RemoveContainer" containerID="df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3" Aug 13 01:08:00.092248 containerd[1478]: time="2025-08-13T01:08:00.092171807Z" level=error msg="ContainerStatus for \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\": not found" Aug 13 01:08:00.092477 kubelet[1833]: E0813 01:08:00.092427 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\": not found" containerID="df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3" Aug 13 01:08:00.092556 kubelet[1833]: I0813 01:08:00.092482 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3"} err="failed to get container status \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"df93b82009a7417354db13c53471e5f35496ef2e12916833ba823643f81403a3\": not found" Aug 13 01:08:00.093379 systemd[1]: Removed slice kubepods-besteffort-pod7f26481a_205b_42bf_bb1f_48df3d99d8eb.slice - 
libcontainer container kubepods-besteffort-pod7f26481a_205b_42bf_bb1f_48df3d99d8eb.slice. Aug 13 01:08:00.093469 systemd[1]: kubepods-besteffort-pod7f26481a_205b_42bf_bb1f_48df3d99d8eb.slice: Consumed 637ms CPU time, 67.7M memory peak. Aug 13 01:08:00.110093 kubelet[1833]: I0813 01:08:00.110073 1833 kubelet.go:2405] "Pod admission denied" podUID="6ccada83-3604-46aa-a9c5-fd29ed700822" pod="tigera-operator/tigera-operator-747864d56d-5rcmf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.132484 kubelet[1833]: I0813 01:08:00.132466 1833 kubelet.go:2405] "Pod admission denied" podUID="2a469d83-0f30-4b23-8f53-f944e0d14a5d" pod="tigera-operator/tigera-operator-747864d56d-qmthj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.179391 kubelet[1833]: I0813 01:08:00.179266 1833 kubelet.go:2405] "Pod admission denied" podUID="53df1ee7-cb66-42c1-a921-96e02355635c" pod="tigera-operator/tigera-operator-747864d56d-54jxr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.213949 kubelet[1833]: I0813 01:08:00.213886 1833 kubelet.go:2405] "Pod admission denied" podUID="d395c3c9-c0cf-42bf-9bc8-3bb61d74bdf1" pod="tigera-operator/tigera-operator-747864d56d-r7czf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.243620 kubelet[1833]: I0813 01:08:00.243599 1833 kubelet.go:2405] "Pod admission denied" podUID="d430a97e-9e75-48f5-a881-f03a9ca70e24" pod="tigera-operator/tigera-operator-747864d56d-m7qzf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.277054 kubelet[1833]: I0813 01:08:00.277029 1833 kubelet.go:2405] "Pod admission denied" podUID="cb1d6350-2c8d-4a4c-b392-8cd26ad4d310" pod="tigera-operator/tigera-operator-747864d56d-nvcdh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:08:00.304984 kubelet[1833]: I0813 01:08:00.304940 1833 kubelet.go:2405] "Pod admission denied" podUID="3407d7de-8281-4605-b587-fed8a77f2fc6" pod="tigera-operator/tigera-operator-747864d56d-l2jd5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.372981 kubelet[1833]: I0813 01:08:00.372954 1833 kubelet.go:2405] "Pod admission denied" podUID="0e999bcc-267e-4d6f-b91e-1b95b1ee280b" pod="tigera-operator/tigera-operator-747864d56d-cmkkh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.455827 kubelet[1833]: I0813 01:08:00.455784 1833 kubelet.go:2405] "Pod admission denied" podUID="f4873018-3505-4e7b-9abb-b688e13a5d59" pod="tigera-operator/tigera-operator-747864d56d-wqr6v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.656373 kubelet[1833]: I0813 01:08:00.656092 1833 kubelet.go:2405] "Pod admission denied" podUID="df118daa-576e-48e2-a49a-4d71e0033a96" pod="tigera-operator/tigera-operator-747864d56d-8bx9j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:08:00.737376 kubelet[1833]: I0813 01:08:00.737332 1833 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-wmzmk"] Aug 13 01:08:00.746999 kubelet[1833]: I0813 01:08:00.746978 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:00.747081 kubelet[1833]: I0813 01:08:00.747010 1833 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:08:00.748425 containerd[1478]: time="2025-08-13T01:08:00.748370642Z" level=info msg="StopPodSandbox for \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\"" Aug 13 01:08:00.748977 containerd[1478]: time="2025-08-13T01:08:00.748710592Z" level=info msg="TearDown network for sandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" successfully" Aug 13 01:08:00.748977 containerd[1478]: time="2025-08-13T01:08:00.748723102Z" level=info msg="StopPodSandbox for \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" returns successfully" Aug 13 01:08:00.750364 containerd[1478]: time="2025-08-13T01:08:00.749103402Z" level=info msg="RemovePodSandbox for \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\"" Aug 13 01:08:00.750364 containerd[1478]: time="2025-08-13T01:08:00.749122622Z" level=info msg="Forcibly stopping sandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\"" Aug 13 01:08:00.750364 containerd[1478]: time="2025-08-13T01:08:00.749167432Z" level=info msg="TearDown network for sandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" successfully" Aug 13 01:08:00.751754 containerd[1478]: time="2025-08-13T01:08:00.751732213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Aug 13 01:08:00.751820 containerd[1478]: time="2025-08-13T01:08:00.751763723Z" level=info msg="RemovePodSandbox \"6fd7f25ac0e59e6b3fe79cfb3e10ba1f03bb1e8db23431f4f585b18e5dedb98f\" returns successfully" Aug 13 01:08:00.752120 kubelet[1833]: I0813 01:08:00.752106 1833 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:08:00.759187 kubelet[1833]: I0813 01:08:00.759110 1833 kubelet.go:2405] "Pod admission denied" podUID="0decf16d-eb9f-4b1f-af1b-ce5f350beb4a" pod="tigera-operator/tigera-operator-747864d56d-pgl87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.766644 kubelet[1833]: I0813 01:08:00.766627 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:00.766757 kubelet[1833]: I0813 01:08:00.766737 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-67866967cc-2lw7j","default/nginx-deployment-7fcdb87857-rf29h","calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"] Aug 13 01:08:00.766854 kubelet[1833]: I0813 01:08:00.766840 1833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:08:00.767316 containerd[1478]: time="2025-08-13T01:08:00.767289889Z" level=info msg="StopContainer for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" with timeout 2 (s)" Aug 13 01:08:00.767632 containerd[1478]: time="2025-08-13T01:08:00.767613349Z" level=info msg="Stop container \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" with signal terminated" Aug 13 01:08:00.777022 kubelet[1833]: E0813 01:08:00.776997 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:08:00.781830 systemd[1]: 
cri-containerd-e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34.scope: Deactivated successfully. Aug 13 01:08:00.804588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34-rootfs.mount: Deactivated successfully. Aug 13 01:08:00.807942 containerd[1478]: time="2025-08-13T01:08:00.807870795Z" level=info msg="shim disconnected" id=e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34 namespace=k8s.io Aug 13 01:08:00.808208 containerd[1478]: time="2025-08-13T01:08:00.807926296Z" level=warning msg="cleaning up after shim disconnected" id=e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34 namespace=k8s.io Aug 13 01:08:00.808208 containerd[1478]: time="2025-08-13T01:08:00.808202276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:08:00.821291 containerd[1478]: time="2025-08-13T01:08:00.821256770Z" level=info msg="StopContainer for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" returns successfully" Aug 13 01:08:00.821735 containerd[1478]: time="2025-08-13T01:08:00.821671300Z" level=info msg="StopPodSandbox for \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\"" Aug 13 01:08:00.821735 containerd[1478]: time="2025-08-13T01:08:00.821694560Z" level=info msg="Container to stop \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:08:00.823468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6-shm.mount: Deactivated successfully. Aug 13 01:08:00.830419 systemd[1]: cri-containerd-56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6.scope: Deactivated successfully. 
Aug 13 01:08:00.848224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6-rootfs.mount: Deactivated successfully. Aug 13 01:08:00.848692 containerd[1478]: time="2025-08-13T01:08:00.848581161Z" level=info msg="shim disconnected" id=56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6 namespace=k8s.io Aug 13 01:08:00.848692 containerd[1478]: time="2025-08-13T01:08:00.848648001Z" level=warning msg="cleaning up after shim disconnected" id=56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6 namespace=k8s.io Aug 13 01:08:00.848692 containerd[1478]: time="2025-08-13T01:08:00.848657131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:08:00.857038 kubelet[1833]: I0813 01:08:00.856955 1833 kubelet.go:2405] "Pod admission denied" podUID="d104ffc4-4ddf-4083-8a7e-23cbd4ad9561" pod="tigera-operator/tigera-operator-747864d56d-kpz2f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.915730 systemd-networkd[1393]: calif333340ed71: Link DOWN Aug 13 01:08:00.915739 systemd-networkd[1393]: calif333340ed71: Lost carrier Aug 13 01:08:00.957422 kubelet[1833]: I0813 01:08:00.956918 1833 kubelet.go:2405] "Pod admission denied" podUID="5810cb31-f727-4cc4-a71a-ab019b395ade" pod="tigera-operator/tigera-operator-747864d56d-8zsw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.914 [INFO][4601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.915 [INFO][4601] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" iface="eth0" netns="/var/run/netns/cni-fd434eaf-79c6-4db2-66ca-a0292b805bff" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.915 [INFO][4601] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" iface="eth0" netns="/var/run/netns/cni-fd434eaf-79c6-4db2-66ca-a0292b805bff" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.922 [INFO][4601] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" after=6.874033ms iface="eth0" netns="/var/run/netns/cni-fd434eaf-79c6-4db2-66ca-a0292b805bff" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.922 [INFO][4601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.922 [INFO][4601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.942 [INFO][4608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.942 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.942 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.980 [INFO][4608] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.980 [INFO][4608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.982 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:00.987129 containerd[1478]: 2025-08-13 01:08:00.984 [INFO][4601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:00.989458 containerd[1478]: time="2025-08-13T01:08:00.989416526Z" level=info msg="TearDown network for sandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" successfully" Aug 13 01:08:00.989494 containerd[1478]: time="2025-08-13T01:08:00.989457486Z" level=info msg="StopPodSandbox for \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" returns successfully" Aug 13 01:08:00.990209 systemd[1]: run-netns-cni\x2dfd434eaf\x2d79c6\x2d4db2\x2d66ca\x2da0292b805bff.mount: Deactivated successfully. 
Aug 13 01:08:00.995083 kubelet[1833]: I0813 01:08:00.995065 1833 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-67866967cc-2lw7j" Aug 13 01:08:00.995083 kubelet[1833]: I0813 01:08:00.995084 1833 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-67866967cc-2lw7j"] Aug 13 01:08:01.055430 kubelet[1833]: I0813 01:08:01.055383 1833 kubelet.go:2405] "Pod admission denied" podUID="01c80ff6-792d-41b9-9ff9-3d775ef9e073" pod="tigera-operator/tigera-operator-747864d56d-2hj2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:01.060594 kubelet[1833]: I0813 01:08:01.060577 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxjcw\" (UniqueName: \"kubernetes.io/projected/6b7cf428-2808-4afb-aea8-f874628caa6c-kube-api-access-vxjcw\") pod \"6b7cf428-2808-4afb-aea8-f874628caa6c\" (UID: \"6b7cf428-2808-4afb-aea8-f874628caa6c\") " Aug 13 01:08:01.060676 kubelet[1833]: I0813 01:08:01.060605 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b7cf428-2808-4afb-aea8-f874628caa6c-calico-apiserver-certs\") pod \"6b7cf428-2808-4afb-aea8-f874628caa6c\" (UID: \"6b7cf428-2808-4afb-aea8-f874628caa6c\") " Aug 13 01:08:01.065467 kubelet[1833]: I0813 01:08:01.065441 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7cf428-2808-4afb-aea8-f874628caa6c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "6b7cf428-2808-4afb-aea8-f874628caa6c" (UID: "6b7cf428-2808-4afb-aea8-f874628caa6c"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:08:01.065991 systemd[1]: var-lib-kubelet-pods-6b7cf428\x2d2808\x2d4afb\x2daea8\x2df874628caa6c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:08:01.068241 kubelet[1833]: I0813 01:08:01.068203 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7cf428-2808-4afb-aea8-f874628caa6c-kube-api-access-vxjcw" (OuterVolumeSpecName: "kube-api-access-vxjcw") pod "6b7cf428-2808-4afb-aea8-f874628caa6c" (UID: "6b7cf428-2808-4afb-aea8-f874628caa6c"). InnerVolumeSpecName "kube-api-access-vxjcw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:08:01.089279 kubelet[1833]: I0813 01:08:01.089256 1833 scope.go:117] "RemoveContainer" containerID="e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34" Aug 13 01:08:01.091151 containerd[1478]: time="2025-08-13T01:08:01.091068406Z" level=info msg="RemoveContainer for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\"" Aug 13 01:08:01.094259 containerd[1478]: time="2025-08-13T01:08:01.094239808Z" level=info msg="RemoveContainer for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" returns successfully" Aug 13 01:08:01.094843 kubelet[1833]: I0813 01:08:01.094478 1833 scope.go:117] "RemoveContainer" containerID="e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34" Aug 13 01:08:01.094915 containerd[1478]: time="2025-08-13T01:08:01.094622268Z" level=error msg="ContainerStatus for \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\": not found" Aug 13 01:08:01.095427 kubelet[1833]: E0813 01:08:01.095398 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\": not found" containerID="e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34" Aug 13 01:08:01.095500 kubelet[1833]: I0813 01:08:01.095422 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34"} err="failed to get container status \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5e57adc7146ce517f374da2bc9ff5b4e79deea610eb7cea5600026b28fd5d34\": not found" Aug 13 01:08:01.095504 systemd[1]: Removed slice kubepods-besteffort-pod6b7cf428_2808_4afb_aea8_f874628caa6c.slice - libcontainer container kubepods-besteffort-pod6b7cf428_2808_4afb_aea8_f874628caa6c.slice. Aug 13 01:08:01.162916 kubelet[1833]: I0813 01:08:01.161734 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxjcw\" (UniqueName: \"kubernetes.io/projected/6b7cf428-2808-4afb-aea8-f874628caa6c-kube-api-access-vxjcw\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:08:01.162916 kubelet[1833]: I0813 01:08:01.161755 1833 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b7cf428-2808-4afb-aea8-f874628caa6c-calico-apiserver-certs\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:08:01.307765 kubelet[1833]: I0813 01:08:01.307557 1833 kubelet.go:2405] "Pod admission denied" podUID="688b8dbe-4111-42db-9788-afcd3b901f19" pod="tigera-operator/tigera-operator-747864d56d-7d9l5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:08:01.406277 kubelet[1833]: I0813 01:08:01.406239 1833 kubelet.go:2405] "Pod admission denied" podUID="eb14b0c7-5ce6-4389-881e-59d7c2c69525" pod="tigera-operator/tigera-operator-747864d56d-btlkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:01.503911 kubelet[1833]: I0813 01:08:01.503865 1833 kubelet.go:2405] "Pod admission denied" podUID="e63ff0e7-df82-46f6-bb21-9220c68d7ba9" pod="tigera-operator/tigera-operator-747864d56d-pqzk2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:01.605512 kubelet[1833]: I0813 01:08:01.605485 1833 kubelet.go:2405] "Pod admission denied" podUID="c29f8167-efc1-4529-bfe2-174ec5132e5e" pod="tigera-operator/tigera-operator-747864d56d-648w4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:01.704907 kubelet[1833]: I0813 01:08:01.704873 1833 kubelet.go:2405] "Pod admission denied" podUID="9bbf0e78-616c-48c2-807c-dbb73948b5ea" pod="tigera-operator/tigera-operator-747864d56d-jrsq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:01.777596 kubelet[1833]: E0813 01:08:01.777554 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:08:01.804636 systemd[1]: var-lib-kubelet-pods-6b7cf428\x2d2808\x2d4afb\x2daea8\x2df874628caa6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxjcw.mount: Deactivated successfully. Aug 13 01:08:01.806625 kubelet[1833]: I0813 01:08:01.806602 1833 kubelet.go:2405] "Pod admission denied" podUID="c5b1c33e-4279-41dc-9dfd-111aa7bd6a97" pod="tigera-operator/tigera-operator-747864d56d-rqxcs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:08:01.907013 kubelet[1833]: I0813 01:08:01.906539 1833 kubelet.go:2405] "Pod admission denied" podUID="cd569fae-5fb1-4e57-a664-8b974a9d69fa" pod="tigera-operator/tigera-operator-747864d56d-kfdcn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:01.995960 kubelet[1833]: I0813 01:08:01.995914 1833 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-67866967cc-2lw7j"] Aug 13 01:08:02.003757 kubelet[1833]: I0813 01:08:02.003738 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:02.003856 kubelet[1833]: I0813 01:08:02.003773 1833 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:08:02.005447 containerd[1478]: time="2025-08-13T01:08:02.004979357Z" level=info msg="StopPodSandbox for \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\"" Aug 13 01:08:02.005760 kubelet[1833]: I0813 01:08:02.005325 1833 kubelet.go:2405] "Pod admission denied" podUID="b9e6f47a-abb3-409e-86ca-bbc393f2b805" pod="tigera-operator/tigera-operator-747864d56d-mpgl9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.044 [INFO][4632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.045 [INFO][4632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" iface="eth0" netns="" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.045 [INFO][4632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.045 [INFO][4632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.062 [INFO][4640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.062 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.062 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.067 [WARNING][4640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.067 [INFO][4640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.068 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:02.072265 containerd[1478]: 2025-08-13 01:08:02.070 [INFO][4632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.073134 containerd[1478]: time="2025-08-13T01:08:02.072309744Z" level=info msg="TearDown network for sandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" successfully" Aug 13 01:08:02.073134 containerd[1478]: time="2025-08-13T01:08:02.072333924Z" level=info msg="StopPodSandbox for \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" returns successfully" Aug 13 01:08:02.073134 containerd[1478]: time="2025-08-13T01:08:02.072690704Z" level=info msg="RemovePodSandbox for \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\"" Aug 13 01:08:02.073134 containerd[1478]: time="2025-08-13T01:08:02.072712034Z" level=info msg="Forcibly stopping sandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\"" Aug 13 01:08:02.108001 kubelet[1833]: I0813 01:08:02.107965 1833 kubelet.go:2405] "Pod admission denied" podUID="14e6bfc9-1668-45a6-ac60-843831ebd2e9" pod="tigera-operator/tigera-operator-747864d56d-2gfvt" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.117 [INFO][4654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.117 [INFO][4654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" iface="eth0" netns="" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.117 [INFO][4654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.117 [INFO][4654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.133 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.134 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.134 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.141 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.141 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" HandleID="k8s-pod-network.56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Workload="192.168.133.100-k8s-calico--apiserver--67866967cc--2lw7j-eth0" Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.142 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:02.145980 containerd[1478]: 2025-08-13 01:08:02.144 [INFO][4654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6" Aug 13 01:08:02.146374 containerd[1478]: time="2025-08-13T01:08:02.146347083Z" level=info msg="TearDown network for sandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" successfully" Aug 13 01:08:02.148869 containerd[1478]: time="2025-08-13T01:08:02.148848785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 01:08:02.148957 containerd[1478]: time="2025-08-13T01:08:02.148879765Z" level=info msg="RemovePodSandbox \"56265425e5bca094844b60b4460709bebef6d3dd234fe74fd40d656a635698c6\" returns successfully" Aug 13 01:08:02.149727 kubelet[1833]: I0813 01:08:02.149331 1833 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:08:02.156585 kubelet[1833]: I0813 01:08:02.156569 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:02.156649 kubelet[1833]: I0813 01:08:02.156617 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["default/nginx-deployment-7fcdb87857-rf29h","calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"] Aug 13 01:08:02.157057 containerd[1478]: time="2025-08-13T01:08:02.156994188Z" level=info msg="StopContainer for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" with timeout 2 (s)" Aug 13 01:08:02.157784 containerd[1478]: time="2025-08-13T01:08:02.157766978Z" level=info msg="Stop container \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" with signal quit" Aug 13 01:08:02.176781 systemd[1]: cri-containerd-153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32.scope: Deactivated successfully. Aug 13 01:08:02.197604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32-rootfs.mount: Deactivated successfully. 
Aug 13 01:08:02.200058 containerd[1478]: time="2025-08-13T01:08:02.200016445Z" level=info msg="shim disconnected" id=153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32 namespace=k8s.io Aug 13 01:08:02.200228 containerd[1478]: time="2025-08-13T01:08:02.200199245Z" level=warning msg="cleaning up after shim disconnected" id=153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32 namespace=k8s.io Aug 13 01:08:02.200228 containerd[1478]: time="2025-08-13T01:08:02.200215305Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:08:02.206721 kubelet[1833]: I0813 01:08:02.206680 1833 kubelet.go:2405] "Pod admission denied" podUID="e7d09b80-9013-4507-80fc-1ce9f7329e7d" pod="tigera-operator/tigera-operator-747864d56d-gmpx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:02.220674 containerd[1478]: time="2025-08-13T01:08:02.220638803Z" level=info msg="StopContainer for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" returns successfully" Aug 13 01:08:02.222941 containerd[1478]: time="2025-08-13T01:08:02.221250084Z" level=info msg="StopPodSandbox for \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\"" Aug 13 01:08:02.222941 containerd[1478]: time="2025-08-13T01:08:02.221273294Z" level=info msg="Container to stop \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:08:02.223381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816-shm.mount: Deactivated successfully. Aug 13 01:08:02.229874 systemd[1]: cri-containerd-61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816.scope: Deactivated successfully. Aug 13 01:08:02.250324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816-rootfs.mount: Deactivated successfully. 
Aug 13 01:08:02.250956 containerd[1478]: time="2025-08-13T01:08:02.250867325Z" level=info msg="shim disconnected" id=61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816 namespace=k8s.io Aug 13 01:08:02.251023 containerd[1478]: time="2025-08-13T01:08:02.250956296Z" level=warning msg="cleaning up after shim disconnected" id=61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816 namespace=k8s.io Aug 13 01:08:02.251023 containerd[1478]: time="2025-08-13T01:08:02.250965406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:08:02.307447 kubelet[1833]: I0813 01:08:02.307400 1833 kubelet.go:2405] "Pod admission denied" podUID="f0e32bd3-001a-4a8f-a84c-7b3059ddbc67" pod="tigera-operator/tigera-operator-747864d56d-fb5kt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:02.309458 systemd-networkd[1393]: cali1a3df9e0403: Link DOWN Aug 13 01:08:02.309465 systemd-networkd[1393]: cali1a3df9e0403: Lost carrier Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.307 [INFO][4739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.308 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" iface="eth0" netns="/var/run/netns/cni-415e53e7-5816-47eb-9534-d5cb4de8766b" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.308 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" iface="eth0" netns="/var/run/netns/cni-415e53e7-5816-47eb-9534-d5cb4de8766b" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.313 [INFO][4739] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" after=5.063333ms iface="eth0" netns="/var/run/netns/cni-415e53e7-5816-47eb-9534-d5cb4de8766b" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.313 [INFO][4739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.313 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.332 [INFO][4747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.332 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.333 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.370 [INFO][4747] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.370 [INFO][4747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.371 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:02.375706 containerd[1478]: 2025-08-13 01:08:02.373 [INFO][4739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:02.377642 systemd[1]: run-netns-cni\x2d415e53e7\x2d5816\x2d47eb\x2d9534\x2dd5cb4de8766b.mount: Deactivated successfully. 
Aug 13 01:08:02.377976 containerd[1478]: time="2025-08-13T01:08:02.376050026Z" level=info msg="TearDown network for sandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" successfully" Aug 13 01:08:02.377976 containerd[1478]: time="2025-08-13T01:08:02.377929327Z" level=info msg="StopPodSandbox for \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" returns successfully" Aug 13 01:08:02.382993 kubelet[1833]: I0813 01:08:02.382845 1833 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="default/nginx-deployment-7fcdb87857-rf29h" Aug 13 01:08:02.382993 kubelet[1833]: I0813 01:08:02.382864 1833 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["default/nginx-deployment-7fcdb87857-rf29h"] Aug 13 01:08:02.404048 kubelet[1833]: I0813 01:08:02.403712 1833 kubelet.go:2405] "Pod admission denied" podUID="7f54b45a-2f93-4a40-b7f0-02c7aa25f049" pod="tigera-operator/tigera-operator-747864d56d-tvn64" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:02.470580 kubelet[1833]: I0813 01:08:02.470480 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2rtm\" (UniqueName: \"kubernetes.io/projected/6538eb20-ef28-42cc-a8f0-f2d5f23ae51f-kube-api-access-r2rtm\") pod \"6538eb20-ef28-42cc-a8f0-f2d5f23ae51f\" (UID: \"6538eb20-ef28-42cc-a8f0-f2d5f23ae51f\") " Aug 13 01:08:02.473286 kubelet[1833]: I0813 01:08:02.473262 1833 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6538eb20-ef28-42cc-a8f0-f2d5f23ae51f-kube-api-access-r2rtm" (OuterVolumeSpecName: "kube-api-access-r2rtm") pod "6538eb20-ef28-42cc-a8f0-f2d5f23ae51f" (UID: "6538eb20-ef28-42cc-a8f0-f2d5f23ae51f"). InnerVolumeSpecName "kube-api-access-r2rtm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:08:02.571731 kubelet[1833]: I0813 01:08:02.571500 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2rtm\" (UniqueName: \"kubernetes.io/projected/6538eb20-ef28-42cc-a8f0-f2d5f23ae51f-kube-api-access-r2rtm\") on node \"192.168.133.100\" DevicePath \"\"" Aug 13 01:08:02.658238 kubelet[1833]: I0813 01:08:02.658203 1833 kubelet.go:2405] "Pod admission denied" podUID="7a5cffce-db2a-432a-8682-5a2a1fffb1a8" pod="tigera-operator/tigera-operator-747864d56d-lm7n5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:02.778967 kubelet[1833]: E0813 01:08:02.778812 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 01:08:02.804625 systemd[1]: var-lib-kubelet-pods-6538eb20\x2def28\x2d42cc\x2da8f0\x2df2d5f23ae51f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr2rtm.mount: Deactivated successfully. Aug 13 01:08:02.857357 systemd[1]: Removed slice kubepods-besteffort-pod6538eb20_ef28_42cc_a8f0_f2d5f23ae51f.slice - libcontainer container kubepods-besteffort-pod6538eb20_ef28_42cc_a8f0_f2d5f23ae51f.slice. Aug 13 01:08:02.905612 kubelet[1833]: I0813 01:08:02.905435 1833 kubelet.go:2405] "Pod admission denied" podUID="76a3d935-74cd-47c1-be03-b68ac2adc958" pod="tigera-operator/tigera-operator-747864d56d-qvtj5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:03.006484 kubelet[1833]: I0813 01:08:03.006435 1833 kubelet.go:2405] "Pod admission denied" podUID="6efc99ff-32f9-425b-a9fa-92574addb875" pod="tigera-operator/tigera-operator-747864d56d-dtt9h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:08:03.057353 kubelet[1833]: I0813 01:08:03.057254 1833 kubelet.go:2405] "Pod admission denied" podUID="47dbe931-8288-469c-87b8-21a0547826d1" pod="tigera-operator/tigera-operator-747864d56d-mbzwc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:03.103358 kubelet[1833]: I0813 01:08:03.102879 1833 scope.go:117] "RemoveContainer" containerID="153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32" Aug 13 01:08:03.104377 containerd[1478]: time="2025-08-13T01:08:03.104352849Z" level=info msg="RemoveContainer for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\"" Aug 13 01:08:03.111170 containerd[1478]: time="2025-08-13T01:08:03.108038001Z" level=info msg="RemoveContainer for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" returns successfully" Aug 13 01:08:03.111170 containerd[1478]: time="2025-08-13T01:08:03.108306471Z" level=error msg="ContainerStatus for \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\": not found" Aug 13 01:08:03.111306 kubelet[1833]: I0813 01:08:03.108178 1833 scope.go:117] "RemoveContainer" containerID="153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32" Aug 13 01:08:03.111306 kubelet[1833]: E0813 01:08:03.108415 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\": not found" containerID="153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32" Aug 13 01:08:03.111306 kubelet[1833]: I0813 01:08:03.108436 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32"} err="failed to get 
container status \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\": rpc error: code = NotFound desc = an error occurred when try to find container \"153ebe15097621b9883917ace7f7741d1b5da07c9e20349c3dd672c1f366fb32\": not found" Aug 13 01:08:03.156389 kubelet[1833]: I0813 01:08:03.156366 1833 kubelet.go:2405] "Pod admission denied" podUID="715b0da7-16a2-449d-9982-e5d8f2bebfed" pod="tigera-operator/tigera-operator-747864d56d-ssxhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:03.258965 kubelet[1833]: I0813 01:08:03.258155 1833 kubelet.go:2405] "Pod admission denied" podUID="bdb831c6-5850-4bd7-95f8-493344919ec6" pod="tigera-operator/tigera-operator-747864d56d-tmbsq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:03.306353 kubelet[1833]: I0813 01:08:03.306293 1833 kubelet.go:2405] "Pod admission denied" podUID="2b848a9c-1589-4681-9b21-3410bbf13230" pod="tigera-operator/tigera-operator-747864d56d-r5dcn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:08:03.383510 kubelet[1833]: I0813 01:08:03.383469 1833 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["default/nginx-deployment-7fcdb87857-rf29h"] Aug 13 01:08:03.391470 kubelet[1833]: I0813 01:08:03.391442 1833 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:03.391470 kubelet[1833]: I0813 01:08:03.391471 1833 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:08:03.392769 containerd[1478]: time="2025-08-13T01:08:03.392741787Z" level=info msg="StopPodSandbox for \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\"" Aug 13 01:08:03.407625 kubelet[1833]: I0813 01:08:03.407132 1833 kubelet.go:2405] "Pod admission denied" podUID="1c5ab1a7-b733-476a-a9b6-2a7a48995c30" pod="tigera-operator/tigera-operator-747864d56d-mlqpg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.432 [INFO][4767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.432 [INFO][4767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" iface="eth0" netns="" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.432 [INFO][4767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.432 [INFO][4767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.455 [INFO][4774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.455 [INFO][4774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.455 [INFO][4774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.463 [WARNING][4774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.463 [INFO][4774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.465 [INFO][4774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:03.469805 containerd[1478]: 2025-08-13 01:08:03.467 [INFO][4767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.470267 containerd[1478]: time="2025-08-13T01:08:03.470230889Z" level=info msg="TearDown network for sandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" successfully" Aug 13 01:08:03.470267 containerd[1478]: time="2025-08-13T01:08:03.470257039Z" level=info msg="StopPodSandbox for \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" returns successfully" Aug 13 01:08:03.471068 containerd[1478]: time="2025-08-13T01:08:03.470871599Z" level=info msg="RemovePodSandbox for \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\"" Aug 13 01:08:03.471150 containerd[1478]: time="2025-08-13T01:08:03.471134489Z" level=info msg="Forcibly stopping sandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\"" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.511 [INFO][4789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.549331 
containerd[1478]: 2025-08-13 01:08:03.511 [INFO][4789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" iface="eth0" netns="" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.511 [INFO][4789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.511 [INFO][4789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.535 [INFO][4797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.535 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.535 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.543 [WARNING][4797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.543 [INFO][4797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" HandleID="k8s-pod-network.61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Workload="192.168.133.100-k8s-nginx--deployment--7fcdb87857--rf29h-eth0" Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.544 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:08:03.549331 containerd[1478]: 2025-08-13 01:08:03.547 [INFO][4789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816" Aug 13 01:08:03.550002 containerd[1478]: time="2025-08-13T01:08:03.549364121Z" level=info msg="TearDown network for sandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" successfully" Aug 13 01:08:03.552494 containerd[1478]: time="2025-08-13T01:08:03.552258802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 01:08:03.552494 containerd[1478]: time="2025-08-13T01:08:03.552311002Z" level=info msg="RemovePodSandbox \"61300f6494998e8c255a542bf5360151f5139e73bb5f3c2732b3a273b4c3d816\" returns successfully"
Aug 13 01:08:03.553028 kubelet[1833]: I0813 01:08:03.553001 1833 image_gc_manager.go:447] "Attempting to delete unused images"
Aug 13 01:08:03.563334 kubelet[1833]: I0813 01:08:03.563294 1833 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:08:03.563392 kubelet[1833]: I0813 01:08:03.563345 1833 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-node-zkd5h","kube-system/kube-proxy-qgfjh","calico-system/csi-node-driver-bc77x"]
Aug 13 01:08:03.563392 kubelet[1833]: E0813 01:08:03.563375 1833 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-zkd5h"
Aug 13 01:08:03.563392 kubelet[1833]: E0813 01:08:03.563386 1833 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qgfjh"
Aug 13 01:08:03.563462 kubelet[1833]: E0813 01:08:03.563397 1833 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc77x"
Aug 13 01:08:03.563462 kubelet[1833]: I0813 01:08:03.563408 1833 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:08:03.608478 kubelet[1833]: I0813 01:08:03.608453 1833 kubelet.go:2405] "Pod admission denied" podUID="d5ae0289-fe9b-48d2-9ed9-3ca12247d88f" pod="tigera-operator/tigera-operator-747864d56d-5dqmv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:03.707278 kubelet[1833]: I0813 01:08:03.707156 1833 kubelet.go:2405] "Pod admission denied" podUID="a40912a4-6087-4635-adc4-93b4bd794aec" pod="tigera-operator/tigera-operator-747864d56d-2kqwr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:03.779844 kubelet[1833]: E0813 01:08:03.779796 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:08:03.805922 kubelet[1833]: I0813 01:08:03.805875 1833 kubelet.go:2405] "Pod admission denied" podUID="eceec565-88f8-4e52-a2f3-67d78e36acd2" pod="tigera-operator/tigera-operator-747864d56d-z2nm8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:03.906282 kubelet[1833]: I0813 01:08:03.906257 1833 kubelet.go:2405] "Pod admission denied" podUID="b5b9762c-1f58-4ea3-ab37-c6e65921a9e8" pod="tigera-operator/tigera-operator-747864d56d-t7gsv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.007784 kubelet[1833]: I0813 01:08:04.007707 1833 kubelet.go:2405] "Pod admission denied" podUID="33f249ef-27d6-4bfb-9c28-e762dcbf86a5" pod="tigera-operator/tigera-operator-747864d56d-t5cr7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.108583 kubelet[1833]: I0813 01:08:04.108552 1833 kubelet.go:2405] "Pod admission denied" podUID="2975a943-2d0b-4ef5-b929-a6633c3f25da" pod="tigera-operator/tigera-operator-747864d56d-jnp2t" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.206801 kubelet[1833]: I0813 01:08:04.206640 1833 kubelet.go:2405] "Pod admission denied" podUID="7b73518d-f3ff-49d4-98af-3fcb8cca0e8f" pod="tigera-operator/tigera-operator-747864d56d-pd62t" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.312286 kubelet[1833]: I0813 01:08:04.312204 1833 kubelet.go:2405] "Pod admission denied" podUID="e071ce0a-fb6c-4c23-8636-4e526607783d" pod="tigera-operator/tigera-operator-747864d56d-lxrz2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.407492 kubelet[1833]: I0813 01:08:04.407472 1833 kubelet.go:2405] "Pod admission denied" podUID="b299df5d-47db-4eb0-b0c5-81d8aafd0785" pod="tigera-operator/tigera-operator-747864d56d-7vdtb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.513204 kubelet[1833]: I0813 01:08:04.513186 1833 kubelet.go:2405] "Pod admission denied" podUID="a000bf19-f79a-4ccd-89e7-ee645a16abbc" pod="tigera-operator/tigera-operator-747864d56d-kvr2r" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.608388 kubelet[1833]: I0813 01:08:04.608365 1833 kubelet.go:2405] "Pod admission denied" podUID="fc8ea7c0-b9ba-49f2-a0a7-b0fbb8de3a34" pod="tigera-operator/tigera-operator-747864d56d-wzq5m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.780836 kubelet[1833]: E0813 01:08:04.780805 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:08:04.806116 kubelet[1833]: I0813 01:08:04.806095 1833 kubelet.go:2405] "Pod admission denied" podUID="40e42ac8-9ccb-4be7-8d44-f0a7e3235ae3" pod="tigera-operator/tigera-operator-747864d56d-f455d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.907587 kubelet[1833]: I0813 01:08:04.907266 1833 kubelet.go:2405] "Pod admission denied" podUID="6ac3a473-d51b-47b5-a995-487b258c305c" pod="tigera-operator/tigera-operator-747864d56d-9scnp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:04.956262 kubelet[1833]: I0813 01:08:04.956242 1833 kubelet.go:2405] "Pod admission denied" podUID="eb561d2d-9b4e-496b-bddf-07eab3d3ab63" pod="tigera-operator/tigera-operator-747864d56d-mjgm8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.060650 kubelet[1833]: I0813 01:08:05.060519 1833 kubelet.go:2405] "Pod admission denied" podUID="6af0e539-9d3c-4256-92d8-f6151242f922" pod="tigera-operator/tigera-operator-747864d56d-v49lg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.258118 kubelet[1833]: I0813 01:08:05.257870 1833 kubelet.go:2405] "Pod admission denied" podUID="a1ba938a-776b-4f40-9ee3-bb2e16e048d0" pod="tigera-operator/tigera-operator-747864d56d-6675d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.358591 kubelet[1833]: I0813 01:08:05.358563 1833 kubelet.go:2405] "Pod admission denied" podUID="9e422b6f-0f34-4719-99ed-11a4e81ddda4" pod="tigera-operator/tigera-operator-747864d56d-fnqhw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.418924 kubelet[1833]: I0813 01:08:05.417314 1833 kubelet.go:2405] "Pod admission denied" podUID="89e119eb-519e-4c9c-844f-d5a18f38ea5d" pod="tigera-operator/tigera-operator-747864d56d-7fkf6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.506517 kubelet[1833]: I0813 01:08:05.506484 1833 kubelet.go:2405] "Pod admission denied" podUID="14808859-b126-4862-9d70-5eeb0dc4a6fe" pod="tigera-operator/tigera-operator-747864d56d-5zvz8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.708314 kubelet[1833]: I0813 01:08:05.708276 1833 kubelet.go:2405] "Pod admission denied" podUID="daa1c622-5a90-4e19-80fa-612283c67c1b" pod="tigera-operator/tigera-operator-747864d56d-4rxwc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.781249 kubelet[1833]: E0813 01:08:05.781197 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:08:05.807725 kubelet[1833]: I0813 01:08:05.807697 1833 kubelet.go:2405] "Pod admission denied" podUID="c02a401c-1ba7-44c5-a495-6ea2e2b7b67a" pod="tigera-operator/tigera-operator-747864d56d-sg2p8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:05.907829 kubelet[1833]: I0813 01:08:05.907792 1833 kubelet.go:2405] "Pod admission denied" podUID="1d9e39ac-1abe-4f67-9b37-740464bb673b" pod="tigera-operator/tigera-operator-747864d56d-l5rfn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.006014 kubelet[1833]: I0813 01:08:06.005800 1833 kubelet.go:2405] "Pod admission denied" podUID="7507be24-6e37-44d0-83f4-4bf3dd8c73d3" pod="tigera-operator/tigera-operator-747864d56d-65rnv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.106451 kubelet[1833]: I0813 01:08:06.106060 1833 kubelet.go:2405] "Pod admission denied" podUID="d5b66c3e-cae2-4c2f-9fb1-8f4b36c2e25f" pod="tigera-operator/tigera-operator-747864d56d-fgjh6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.207959 kubelet[1833]: I0813 01:08:06.207920 1833 kubelet.go:2405] "Pod admission denied" podUID="7f093ce5-16c4-4105-98cb-470b1cd3458f" pod="tigera-operator/tigera-operator-747864d56d-9zz2k" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.307225 kubelet[1833]: I0813 01:08:06.307110 1833 kubelet.go:2405] "Pod admission denied" podUID="e24e8a88-7ca1-495f-b3c9-9336b2fd7248" pod="tigera-operator/tigera-operator-747864d56d-fbq2b" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.410958 kubelet[1833]: I0813 01:08:06.409393 1833 kubelet.go:2405] "Pod admission denied" podUID="9ff7f7ed-f076-4484-840f-2600f088fdc3" pod="tigera-operator/tigera-operator-747864d56d-ks47g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.508400 kubelet[1833]: I0813 01:08:06.508337 1833 kubelet.go:2405] "Pod admission denied" podUID="cdae2afc-b858-4472-8612-d9b8ed289427" pod="tigera-operator/tigera-operator-747864d56d-np4xz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.530169 update_engine[1460]: I20250813 01:08:06.530102 1460 update_attempter.cc:509] Updating boot flags... Aug 13 01:08:06.578922 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (4812)
Aug 13 01:08:06.610943 kubelet[1833]: I0813 01:08:06.609596 1833 kubelet.go:2405] "Pod admission denied" podUID="123445ea-a6b3-420f-bbd6-c3e674e24e30" pod="tigera-operator/tigera-operator-747864d56d-ldf2k" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.652951 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (4816)
Aug 13 01:08:06.709315 kubelet[1833]: I0813 01:08:06.709278 1833 kubelet.go:2405] "Pod admission denied" podUID="5f788da7-5ed7-40c6-b476-41dc9d812dca" pod="tigera-operator/tigera-operator-747864d56d-xbrg7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.782334 kubelet[1833]: E0813 01:08:06.782308 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:08:06.808029 kubelet[1833]: I0813 01:08:06.807992 1833 kubelet.go:2405] "Pod admission denied" podUID="1d79e1d2-1d89-4587-842a-e37a60e90434" pod="tigera-operator/tigera-operator-747864d56d-gmjxf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.856309 kubelet[1833]: I0813 01:08:06.856289 1833 kubelet.go:2405] "Pod admission denied" podUID="680a98b5-42e9-470e-ac88-f214f89b3092" pod="tigera-operator/tigera-operator-747864d56d-qhz46" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:06.957873 kubelet[1833]: I0813 01:08:06.957681 1833 kubelet.go:2405] "Pod admission denied" podUID="1cf92189-f1fd-4734-85c8-45c000800231" pod="tigera-operator/tigera-operator-747864d56d-bdcn4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.158882 kubelet[1833]: I0813 01:08:07.158836 1833 kubelet.go:2405] "Pod admission denied" podUID="b6555dcb-e5b9-44d4-aab1-bcdeb3251259" pod="tigera-operator/tigera-operator-747864d56d-tsqnz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.257894 kubelet[1833]: I0813 01:08:07.257805 1833 kubelet.go:2405] "Pod admission denied" podUID="ff844d42-4ec9-4a34-b968-e30375cca0bd" pod="tigera-operator/tigera-operator-747864d56d-6hscv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.358550 kubelet[1833]: I0813 01:08:07.358524 1833 kubelet.go:2405] "Pod admission denied" podUID="da6ac499-6935-41c6-9769-e0a904fce73e" pod="tigera-operator/tigera-operator-747864d56d-v4rnk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.458526 kubelet[1833]: I0813 01:08:07.458488 1833 kubelet.go:2405] "Pod admission denied" podUID="f9afb401-985a-474d-8fd7-cdd11fed76a9" pod="tigera-operator/tigera-operator-747864d56d-pmjqz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.507010 kubelet[1833]: I0813 01:08:07.506965 1833 kubelet.go:2405] "Pod admission denied" podUID="e381058a-d8fd-45f8-a20f-f7c5ed6ed1bc" pod="tigera-operator/tigera-operator-747864d56d-bcqmb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.608419 kubelet[1833]: I0813 01:08:07.608392 1833 kubelet.go:2405] "Pod admission denied" podUID="8996755a-0ba0-4c82-b918-115e8cc062a5" pod="tigera-operator/tigera-operator-747864d56d-m4mk6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.783094 kubelet[1833]: E0813 01:08:07.783061 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:08:07.810032 kubelet[1833]: I0813 01:08:07.810010 1833 kubelet.go:2405] "Pod admission denied" podUID="141f15e7-0d34-4f68-8f3a-b7506381ee6f" pod="tigera-operator/tigera-operator-747864d56d-cxp6l" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:07.908124 kubelet[1833]: I0813 01:08:07.907878 1833 kubelet.go:2405] "Pod admission denied" podUID="de94d767-5d9d-4ccc-b209-2691cce15ace" pod="tigera-operator/tigera-operator-747864d56d-lbtm6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.012887 kubelet[1833]: I0813 01:08:08.012856 1833 kubelet.go:2405] "Pod admission denied" podUID="c9376b1b-8af9-4c84-8fb8-a65a2020a02b" pod="tigera-operator/tigera-operator-747864d56d-pv6sz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.107599 kubelet[1833]: I0813 01:08:08.107549 1833 kubelet.go:2405] "Pod admission denied" podUID="bba7994b-15c5-4a31-8943-995f5cc60fb4" pod="tigera-operator/tigera-operator-747864d56d-s2c6p" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.210589 kubelet[1833]: I0813 01:08:08.210368 1833 kubelet.go:2405] "Pod admission denied" podUID="fc9996a2-fbf2-4ea5-8feb-e8d3a624e186" pod="tigera-operator/tigera-operator-747864d56d-8jxkv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.317554 kubelet[1833]: I0813 01:08:08.317531 1833 kubelet.go:2405] "Pod admission denied" podUID="d630ac8a-454d-48d2-8af9-cd5fc4a5b8ee" pod="tigera-operator/tigera-operator-747864d56d-qwlkm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.408042 kubelet[1833]: I0813 01:08:08.407998 1833 kubelet.go:2405] "Pod admission denied" podUID="9937b41f-ee3d-409e-b903-548e855e01c3" pod="tigera-operator/tigera-operator-747864d56d-gpr7x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.609068 kubelet[1833]: I0813 01:08:08.609031 1833 kubelet.go:2405] "Pod admission denied" podUID="993bc7a7-6c17-4ea4-b423-e990d96e087e" pod="tigera-operator/tigera-operator-747864d56d-7tsp5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.714187 kubelet[1833]: I0813 01:08:08.714163 1833 kubelet.go:2405] "Pod admission denied" podUID="a2a57c64-ea0a-4783-bd1e-2dc1d7d736b2" pod="tigera-operator/tigera-operator-747864d56d-hjzvc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.784052 kubelet[1833]: E0813 01:08:08.784013 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Aug 13 01:08:08.815042 kubelet[1833]: I0813 01:08:08.815005 1833 kubelet.go:2405] "Pod admission denied" podUID="d03c5a2a-46cc-49b6-9f7b-4bb9f6971af6" pod="tigera-operator/tigera-operator-747864d56d-6586c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:08.911507 kubelet[1833]: I0813 01:08:08.911422 1833 kubelet.go:2405] "Pod admission denied" podUID="b54cd052-97a3-440e-8543-a8bb15eedd1e" pod="tigera-operator/tigera-operator-747864d56d-2nwbz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:09.014497 kubelet[1833]: I0813 01:08:09.014454 1833 kubelet.go:2405] "Pod admission denied" podUID="e1e1144c-d57e-4b6f-b99c-86536c7fabaf" pod="tigera-operator/tigera-operator-747864d56d-zrnjq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:09.208724 kubelet[1833]: I0813 01:08:09.208521 1833 kubelet.go:2405] "Pod admission denied" podUID="7ea990c3-5de0-4a17-9a62-adc8bd33e3ed" pod="tigera-operator/tigera-operator-747864d56d-tqvqg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:09.306678 kubelet[1833]: I0813 01:08:09.306655 1833 kubelet.go:2405] "Pod admission denied" podUID="73a7a0bc-86d7-4e75-8de8-6cec4fab2c4f" pod="tigera-operator/tigera-operator-747864d56d-94qfz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:09.407682 kubelet[1833]: I0813 01:08:09.407359 1833 kubelet.go:2405] "Pod admission denied" podUID="fe6388a6-80e9-4979-b314-b06c4ab7ffa6" pod="tigera-operator/tigera-operator-747864d56d-nvm9s" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:08:09.509187 kubelet[1833]: I0813 01:08:09.509078 1833 kubelet.go:2405] "Pod admission denied" podUID="a88ce049-dfce-423c-8ff4-464096fe9e0c" pod="tigera-operator/tigera-operator-747864d56d-scxn6" reason="Evicted" message="The node had condition: [DiskPressure]. "