Dec 12 18:44:56.053950 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:44:56.053979 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:44:56.053989 kernel: BIOS-provided physical RAM map:
Dec 12 18:44:56.053996 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:44:56.054002 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:44:56.054009 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:44:56.054019 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:44:56.054026 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:44:56.054038 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:44:56.054045 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:44:56.054051 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:44:56.054058 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:44:56.054064 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:44:56.054071 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:44:56.054081 kernel: NX (Execute Disable) protection: active
Dec 12 18:44:56.054088 kernel: APIC: Static calls initialized
Dec 12 18:44:56.054101 kernel: SMBIOS 2.8 present.
Dec 12 18:44:56.054108 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:44:56.054115 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:44:56.054122 kernel: Hypervisor detected: KVM
Dec 12 18:44:56.054131 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:44:56.054138 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:44:56.054145 kernel: kvm-clock: using sched offset of 9634298470 cycles
Dec 12 18:44:56.054152 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:44:56.054160 kernel: tsc: Detected 2000.000 MHz processor
Dec 12 18:44:56.054167 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:44:56.054175 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:44:56.054182 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:44:56.054189 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:44:56.054196 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:44:56.054206 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:44:56.054213 kernel: Using GB pages for direct mapping
Dec 12 18:44:56.054220 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:44:56.054227 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:44:56.054234 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054241 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054248 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054255 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:44:56.054262 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054272 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054283 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054291 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:44:56.054298 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:44:56.054306 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:44:56.054316 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:44:56.054324 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:44:56.054331 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:44:56.054338 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:44:56.054346 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:44:56.054353 kernel: No NUMA configuration found
Dec 12 18:44:56.054360 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:44:56.054368 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Dec 12 18:44:56.054375 kernel: Zone ranges:
Dec 12 18:44:56.054386 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:44:56.054393 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:44:56.054401 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:44:56.054408 kernel: Device empty
Dec 12 18:44:56.054416 kernel: Movable zone start for each node
Dec 12 18:44:56.054424 kernel: Early memory node ranges
Dec 12 18:44:56.054432 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:44:56.054439 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:44:56.054451 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:44:56.054461 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:44:56.054473 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:44:56.054480 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:44:56.054488 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:44:56.054499 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:44:56.054507 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:44:56.054514 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:44:56.054522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:44:56.054529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:44:56.054539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:44:56.054547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:44:56.054554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:44:56.054562 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:44:56.054569 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:44:56.054577 kernel: TSC deadline timer available
Dec 12 18:44:56.054584 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:44:56.054592 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:44:56.054599 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:44:56.054607 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:44:56.054617 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:44:56.055666 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:44:56.055674 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:44:56.055682 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:44:56.055689 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:44:56.055697 kernel: kvm-guest: setup PV sched yield
Dec 12 18:44:56.055704 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:44:56.055711 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:44:56.055718 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:44:56.055730 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:44:56.055737 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:44:56.055744 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:44:56.055752 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:44:56.055759 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:44:56.055766 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:44:56.055774 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:44:56.055782 kernel: random: crng init done
Dec 12 18:44:56.055792 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:44:56.055799 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:44:56.055806 kernel: Fallback order for Node 0: 0
Dec 12 18:44:56.055814 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:44:56.055821 kernel: Policy zone: Normal
Dec 12 18:44:56.055828 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:44:56.055835 kernel: software IO TLB: area num 2.
Dec 12 18:44:56.055842 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:44:56.055849 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:44:56.056009 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:44:56.056016 kernel: Dynamic Preempt: voluntary
Dec 12 18:44:56.056028 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:44:56.056036 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:44:56.056044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:44:56.056051 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:44:56.056058 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:44:56.056066 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:44:56.056073 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:44:56.056084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:44:56.056091 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:44:56.056107 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:44:56.056118 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:44:56.056125 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:44:56.056133 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:44:56.056140 kernel: Console: colour VGA+ 80x25
Dec 12 18:44:56.056148 kernel: printk: legacy console [tty0] enabled
Dec 12 18:44:56.056159 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:44:56.056167 kernel: ACPI: Core revision 20240827
Dec 12 18:44:56.056177 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:44:56.056185 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:44:56.056192 kernel: x2apic enabled
Dec 12 18:44:56.056199 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:44:56.056207 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:44:56.056214 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:44:56.056222 kernel: kvm-guest: setup PV IPIs
Dec 12 18:44:56.056232 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:44:56.056240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:44:56.056247 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 12 18:44:56.056255 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:44:56.056262 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:44:56.056270 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:44:56.056277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:44:56.056285 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:44:56.056292 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:44:56.056303 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:44:56.056311 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:44:56.056318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:44:56.056326 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:44:56.056334 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:44:56.056342 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:44:56.056349 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:44:56.056356 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:44:56.056367 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:44:56.056374 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:44:56.056382 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:44:56.056390 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:44:56.056397 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:44:56.056405 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:44:56.056417 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:44:56.056430 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:44:56.056447 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:44:56.056459 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:44:56.056473 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:44:56.056485 kernel: landlock: Up and running.
Dec 12 18:44:56.056493 kernel: SELinux: Initializing.
Dec 12 18:44:56.056500 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:44:56.056508 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:44:56.056515 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:44:56.056523 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:44:56.056534 kernel: ... version:                0
Dec 12 18:44:56.056541 kernel: ... bit width:              48
Dec 12 18:44:56.056548 kernel: ... generic registers:      6
Dec 12 18:44:56.056556 kernel: ... value mask:             0000ffffffffffff
Dec 12 18:44:56.056563 kernel: ... max period:             00007fffffffffff
Dec 12 18:44:56.056571 kernel: ... fixed-purpose events:   0
Dec 12 18:44:56.056578 kernel: ... event mask:             000000000000003f
Dec 12 18:44:56.056586 kernel: signal: max sigframe size: 3376
Dec 12 18:44:56.056593 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:44:56.056601 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:44:56.056611 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:44:56.057594 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:44:56.057609 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:44:56.057618 kernel: .... node #0, CPUs: #1
Dec 12 18:44:56.057646 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:44:56.057661 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 12 18:44:56.057669 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235480K reserved, 0K cma-reserved)
Dec 12 18:44:56.057676 kernel: devtmpfs: initialized
Dec 12 18:44:56.057684 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:44:56.057698 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:44:56.057706 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:44:56.057713 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:44:56.057721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:44:56.057728 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:44:56.057736 kernel: audit: type=2000 audit(1765565092.293:1): state=initialized audit_enabled=0 res=1
Dec 12 18:44:56.057743 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:44:56.057751 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:44:56.057758 kernel: cpuidle: using governor menu
Dec 12 18:44:56.057769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:44:56.057777 kernel: dca service started, version 1.12.1
Dec 12 18:44:56.057784 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:44:56.057792 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:44:56.057799 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:44:56.057807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:44:56.057814 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:44:56.057826 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:44:56.057834 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:44:56.057844 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:44:56.057852 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:44:56.057859 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:44:56.057867 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:44:56.057874 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:44:56.057882 kernel: ACPI: Interpreter enabled
Dec 12 18:44:56.057889 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:44:56.057897 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:44:56.057904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:44:56.057914 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:44:56.057922 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:44:56.057930 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:44:56.058205 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:44:56.058366 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:44:56.058516 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:44:56.058526 kernel: PCI host bridge to bus 0000:00
Dec 12 18:44:56.058750 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:44:56.058896 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:44:56.059076 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:44:56.059213 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:44:56.059371 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:44:56.059534 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:44:56.061359 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:44:56.061574 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:44:56.061786 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:44:56.061942 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:44:56.062117 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:44:56.062261 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:44:56.062404 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:44:56.062585 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:44:56.062940 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:44:56.063088 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:44:56.063232 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:44:56.063412 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:44:56.063567 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:44:56.063822 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:44:56.064327 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:44:56.064695 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:44:56.065045 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:44:56.065221 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:44:56.065397 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:44:56.065712 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:44:56.065876 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:44:56.066085 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:44:56.066236 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:44:56.066252 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:44:56.066261 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:44:56.066269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:44:56.066277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:44:56.066284 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:44:56.066297 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:44:56.066305 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:44:56.066313 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:44:56.066320 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:44:56.066329 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:44:56.066336 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:44:56.066344 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:44:56.066352 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:44:56.066360 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:44:56.066371 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:44:56.066378 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:44:56.066386 kernel: iommu: Default domain type: Translated
Dec 12 18:44:56.066394 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:44:56.066402 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:44:56.066410 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:44:56.066417 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:44:56.066425 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:44:56.066575 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:44:56.067126 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:44:56.067282 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:44:56.067292 kernel: vgaarb: loaded
Dec 12 18:44:56.067300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:44:56.067308 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:44:56.067316 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:44:56.067323 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:44:56.067331 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:44:56.067339 kernel: pnp: PnP ACPI init
Dec 12 18:44:56.067540 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:44:56.067552 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:44:56.067560 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:44:56.067567 kernel: NET: Registered PF_INET protocol family
Dec 12 18:44:56.067575 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:44:56.067582 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:44:56.067590 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:44:56.067597 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:44:56.067609 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:44:56.067617 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:44:56.067641 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:44:56.067649 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:44:56.067656 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:44:56.067664 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:44:56.067809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:44:56.067947 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:44:56.068091 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:44:56.068228 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:44:56.068363 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:44:56.068499 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:44:56.068509 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:44:56.068517 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:44:56.068524 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:44:56.068532 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:44:56.068540 kernel: Initialise system trusted keyrings
Dec 12 18:44:56.068551 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:44:56.068559 kernel: Key type asymmetric registered
Dec 12 18:44:56.068567 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:44:56.068574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:44:56.068582 kernel: io scheduler mq-deadline registered
Dec 12 18:44:56.068589 kernel: io scheduler kyber registered
Dec 12 18:44:56.068597 kernel: io scheduler bfq registered
Dec 12 18:44:56.068604 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:44:56.068612 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:44:56.068637 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:44:56.068645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:44:56.068652 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:44:56.068848 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:44:56.068856 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:44:56.068864 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:44:56.068871 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:44:56.069065 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:44:56.069243 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:44:56.069400 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:44:55 UTC (1765565095)
Dec 12 18:44:56.069546 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:44:56.069556 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:44:56.069564 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:44:56.069571 kernel: Segment Routing with IPv6
Dec 12 18:44:56.069579 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:44:56.069586 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:44:56.069594 kernel: Key type dns_resolver registered
Dec 12 18:44:56.069606 kernel: IPI shorthand broadcast: enabled
Dec 12 18:44:56.069614 kernel: sched_clock: Marking stable (4559003270, 364425990)->(5079523780, -156094520)
Dec 12 18:44:56.069638 kernel: registered taskstats version 1
Dec 12 18:44:56.069646 kernel: Loading compiled-in X.509 certificates
Dec 12 18:44:56.069654 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:44:56.069661 kernel: Demotion targets for Node 0: null
Dec 12 18:44:56.069669 kernel: Key type .fscrypt registered
Dec 12 18:44:56.069677 kernel: Key type fscrypt-provisioning registered
Dec 12 18:44:56.069684 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:44:56.069695 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:44:56.069703 kernel: ima: No architecture policies found
Dec 12 18:44:56.069711 kernel: clk: Disabling unused clocks
Dec 12 18:44:56.069718 kernel: Warning: unable to open an initial console.
Dec 12 18:44:56.069726 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:44:56.069734 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:44:56.069741 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:44:56.069749 kernel: Run /init as init process
Dec 12 18:44:56.069759 kernel: with arguments:
Dec 12 18:44:56.069766 kernel: /init
Dec 12 18:44:56.069774 kernel: with environment:
Dec 12 18:44:56.069801 kernel: HOME=/
Dec 12 18:44:56.069812 kernel: TERM=linux
Dec 12 18:44:56.069821 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:44:56.069831 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:44:56.069840 systemd[1]: Detected virtualization kvm.
Dec 12 18:44:56.069851 systemd[1]: Detected architecture x86-64.
Dec 12 18:44:56.069859 systemd[1]: Running in initrd.
Dec 12 18:44:56.069867 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:44:56.069875 systemd[1]: Hostname set to .
Dec 12 18:44:56.069883 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:44:56.069891 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:44:56.069900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:44:56.069908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:44:56.069919 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:44:56.069931 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:44:56.069945 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:44:56.069960 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:44:56.069976 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:44:56.069994 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:44:56.070003 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:44:56.070014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:44:56.070023 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:44:56.070031 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:44:56.070040 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:44:56.070048 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:44:56.070056 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:44:56.070064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:44:56.070258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:44:56.070269 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:44:56.070277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:44:56.070285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:44:56.070296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:44:56.070305 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:44:56.070313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:44:56.070324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:44:56.070332 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:44:56.070341 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:44:56.070349 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:44:56.070357 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:44:56.070365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:44:56.070373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:44:56.070382 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:44:56.070393 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:44:56.070438 systemd-journald[186]: Collecting audit messages is disabled.
Dec 12 18:44:56.070462 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:44:56.070471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:44:56.070480 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:44:56.070488 systemd-journald[186]: Journal started
Dec 12 18:44:56.070508 systemd-journald[186]: Runtime Journal (/run/log/journal/ce043aaa34b54a73ac15f35e2e250d64) is 8M, max 78.2M, 70.2M free.
Dec 12 18:44:56.025744 systemd-modules-load[187]: Inserted module 'overlay'
Dec 12 18:44:56.098425 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:44:56.098445 kernel: Bridge firewalling registered
Dec 12 18:44:56.096606 systemd-modules-load[187]: Inserted module 'br_netfilter'
Dec 12 18:44:56.108748 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:44:56.111179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:44:56.119132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:44:56.210757 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:44:56.216798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:44:56.223911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:44:56.230986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:44:56.234220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:44:56.237372 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:44:56.238585 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:44:56.247402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:44:56.250365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:44:56.267019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:44:56.269564 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:44:56.295892 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:44:56.305580 systemd-resolved[216]: Positive Trust Anchors:
Dec 12 18:44:56.306492 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:44:56.306521 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:44:56.313950 systemd-resolved[216]: Defaulting to hostname 'linux'.
Dec 12 18:44:56.315471 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:44:56.316839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:44:56.400679 kernel: SCSI subsystem initialized
Dec 12 18:44:56.411645 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:44:56.429663 kernel: iscsi: registered transport (tcp)
Dec 12 18:44:56.453329 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:44:56.453391 kernel: QLogic iSCSI HBA Driver
Dec 12 18:44:56.477070 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:44:56.495418 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:44:56.499024 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:44:56.572860 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:44:56.575854 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:44:56.630651 kernel: raid6: avx2x4 gen() 33972 MB/s
Dec 12 18:44:56.649646 kernel: raid6: avx2x2 gen() 30988 MB/s
Dec 12 18:44:56.668051 kernel: raid6: avx2x1 gen() 22911 MB/s
Dec 12 18:44:56.668079 kernel: raid6: using algorithm avx2x4 gen() 33972 MB/s
Dec 12 18:44:56.690872 kernel: raid6: .... xor() 4572 MB/s, rmw enabled
Dec 12 18:44:56.690913 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:44:56.712843 kernel: xor: automatically using best checksumming function avx
Dec 12 18:44:56.937662 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:44:56.946789 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:44:56.949714 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:44:56.980858 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Dec 12 18:44:56.987462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:44:56.994705 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:44:57.023794 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Dec 12 18:44:57.059781 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:44:57.063580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:44:57.159249 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:44:57.166395 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:44:57.251859 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:44:57.256646 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 12 18:44:57.379771 kernel: scsi host0: Virtio SCSI HBA
Dec 12 18:44:57.421649 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 12 18:44:57.429666 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:44:57.464429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:44:57.475558 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:44:57.475581 kernel: libata version 3.00 loaded.
Dec 12 18:44:57.464568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:44:57.488115 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:44:57.496999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:44:57.503033 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:44:57.513649 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:44:57.517666 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:44:57.524301 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:44:57.524530 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 12 18:44:57.524948 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:44:57.536677 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 12 18:44:57.536926 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:44:57.537802 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 12 18:44:57.538013 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 12 18:44:57.540650 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:44:57.544682 kernel: scsi host1: ahci
Dec 12 18:44:57.548666 kernel: scsi host2: ahci
Dec 12 18:44:57.548934 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:44:57.553550 kernel: GPT:9289727 != 167739391
Dec 12 18:44:57.553584 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:44:57.557549 kernel: GPT:9289727 != 167739391
Dec 12 18:44:57.557578 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:44:57.560045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:44:57.564652 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 12 18:44:57.565075 kernel: scsi host3: ahci
Dec 12 18:44:57.567556 kernel: scsi host4: ahci
Dec 12 18:44:57.589483 kernel: scsi host5: ahci
Dec 12 18:44:57.589867 kernel: scsi host6: ahci
Dec 12 18:44:57.590164 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Dec 12 18:44:57.590188 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Dec 12 18:44:57.590207 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Dec 12 18:44:57.590225 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Dec 12 18:44:57.590244 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Dec 12 18:44:57.590261 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Dec 12 18:44:57.679714 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 12 18:44:57.750406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:44:57.767926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 12 18:44:57.789476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:44:57.797342 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 12 18:44:57.798147 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 12 18:44:57.802086 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:44:57.840063 disk-uuid[611]: Primary Header is updated.
Dec 12 18:44:57.840063 disk-uuid[611]: Secondary Entries is updated.
Dec 12 18:44:57.840063 disk-uuid[611]: Secondary Header is updated.
Dec 12 18:44:57.849657 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:44:57.863659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:44:57.939173 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:44:57.939239 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:44:57.939252 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 18:44:57.939263 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:44:57.939274 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:44:57.942217 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:44:58.079089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:44:58.092010 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:44:58.093219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:44:58.095350 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:44:58.099885 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:44:58.134048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:44:58.867953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:44:58.869038 disk-uuid[612]: The operation has completed successfully.
Dec 12 18:44:58.938276 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:44:58.938421 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:44:58.978544 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:44:58.996671 sh[639]: Success
Dec 12 18:44:59.018659 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:44:59.018713 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:44:59.024656 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:44:59.034698 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:44:59.079436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:44:59.084728 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:44:59.099708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:44:59.113847 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (651)
Dec 12 18:44:59.118770 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:44:59.119011 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:44:59.131894 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:44:59.131928 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:44:59.134671 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:44:59.139167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:44:59.140703 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:44:59.142074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:44:59.143949 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:44:59.146963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:44:59.194248 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (684)
Dec 12 18:44:59.194296 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:44:59.198859 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:44:59.207658 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:44:59.207905 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:44:59.212429 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:44:59.221688 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:44:59.224345 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:44:59.227366 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:44:59.310117 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:44:59.315756 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:44:59.425743 systemd-networkd[820]: lo: Link UP
Dec 12 18:44:59.428002 systemd-networkd[820]: lo: Gained carrier
Dec 12 18:44:59.432268 systemd-networkd[820]: Enumeration completed
Dec 12 18:44:59.433373 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:44:59.433381 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:44:59.471123 systemd-networkd[820]: eth0: Link UP
Dec 12 18:44:59.472273 systemd-networkd[820]: eth0: Gained carrier
Dec 12 18:44:59.472291 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:44:59.490224 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:44:59.496825 systemd[1]: Reached target network.target - Network.
Dec 12 18:44:59.634723 ignition[755]: Ignition 2.22.0
Dec 12 18:44:59.634740 ignition[755]: Stage: fetch-offline
Dec 12 18:44:59.634776 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:44:59.634787 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:44:59.639056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:44:59.634867 ignition[755]: parsed url from cmdline: ""
Dec 12 18:44:59.634873 ignition[755]: no config URL provided
Dec 12 18:44:59.634878 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:44:59.634888 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:44:59.642759 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:44:59.634894 ignition[755]: failed to fetch config: resource requires networking
Dec 12 18:44:59.635285 ignition[755]: Ignition finished successfully
Dec 12 18:44:59.708415 ignition[829]: Ignition 2.22.0
Dec 12 18:44:59.708443 ignition[829]: Stage: fetch
Dec 12 18:44:59.708565 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:44:59.708577 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:44:59.708702 ignition[829]: parsed url from cmdline: ""
Dec 12 18:44:59.708708 ignition[829]: no config URL provided
Dec 12 18:44:59.708714 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:44:59.708724 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:44:59.708748 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 12 18:44:59.716501 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:44:59.917129 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 12 18:44:59.917337 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:45:00.317614 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 12 18:45:00.318072 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:45:00.385733 systemd-networkd[820]: eth0: DHCPv4 address 172.237.139.56/24, gateway 172.237.139.1 acquired from 23.210.200.68
Dec 12 18:45:00.845817 systemd-networkd[820]: eth0: Gained IPv6LL
Dec 12 18:45:01.118758 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 12 18:45:01.212496 ignition[829]: PUT result: OK
Dec 12 18:45:01.212566 ignition[829]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 12 18:45:01.319614 ignition[829]: GET result: OK
Dec 12 18:45:01.319726 ignition[829]: parsing config with SHA512: 999be3fef6c617382bbcba8928db46bce68a61eeedff50e694357de4eed050224ecaa6477e75cae2ffd2790328837cebd768feed8f250fc5770c5b8f10644736
Dec 12 18:45:01.322304 unknown[829]: fetched base config from "system"
Dec 12 18:45:01.322317 unknown[829]: fetched base config from "system"
Dec 12 18:45:01.322476 ignition[829]: fetch: fetch complete
Dec 12 18:45:01.322323 unknown[829]: fetched user config from "akamai"
Dec 12 18:45:01.322482 ignition[829]: fetch: fetch passed
Dec 12 18:45:01.327751 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:45:01.322528 ignition[829]: Ignition finished successfully
Dec 12 18:45:01.350818 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:45:01.417970 ignition[837]: Ignition 2.22.0
Dec 12 18:45:01.417986 ignition[837]: Stage: kargs
Dec 12 18:45:01.418107 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:01.418118 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:45:01.428466 ignition[837]: kargs: kargs passed
Dec 12 18:45:01.428528 ignition[837]: Ignition finished successfully
Dec 12 18:45:01.432290 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:45:01.434300 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:45:01.475304 ignition[843]: Ignition 2.22.0
Dec 12 18:45:01.475320 ignition[843]: Stage: disks
Dec 12 18:45:01.475440 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:01.475452 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:45:01.477661 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:45:01.475940 ignition[843]: disks: disks passed
Dec 12 18:45:01.478930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:45:01.475985 ignition[843]: Ignition finished successfully
Dec 12 18:45:01.480315 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:45:01.481830 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:45:01.483228 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:45:01.484800 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:45:01.487116 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:45:01.519153 systemd-fsck[851]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:45:01.524206 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:45:01.527507 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:45:01.652645 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:45:01.653411 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:45:01.654754 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:45:01.657510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:45:01.661715 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:45:01.663743 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:45:01.664949 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:45:01.664982 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:45:01.674281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:45:01.678010 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:45:01.687690 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (859)
Dec 12 18:45:01.694759 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:01.694815 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:45:01.702743 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:45:01.702784 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:45:01.707754 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:45:01.713117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:45:01.760553 initrd-setup-root[883]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:45:01.768889 initrd-setup-root[890]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:45:01.777696 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:45:01.783517 initrd-setup-root[904]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:45:01.914608 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:45:01.918909 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:45:01.921769 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:45:01.942925 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:45:01.948645 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:01.971574 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:45:02.004958 ignition[972]: INFO : Ignition 2.22.0
Dec 12 18:45:02.004958 ignition[972]: INFO : Stage: mount
Dec 12 18:45:02.007084 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:02.007084 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:45:02.007084 ignition[972]: INFO : mount: mount passed
Dec 12 18:45:02.007084 ignition[972]: INFO : Ignition finished successfully
Dec 12 18:45:02.008478 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:45:02.010736 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:45:02.655319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:45:02.679679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (984)
Dec 12 18:45:02.683775 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:02.683822 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:45:02.693702 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:45:02.693747 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:45:02.693769 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:45:02.698938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:45:02.746256 ignition[1000]: INFO : Ignition 2.22.0
Dec 12 18:45:02.746256 ignition[1000]: INFO : Stage: files
Dec 12 18:45:02.748333 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:02.748333 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:45:02.758339 ignition[1000]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:45:02.759603 ignition[1000]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:45:02.759603 ignition[1000]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:45:02.763671 ignition[1000]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:45:02.764832 ignition[1000]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:45:02.766164 unknown[1000]: wrote ssh authorized keys file for user: core
Dec 12 18:45:02.767179 ignition[1000]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:45:02.768194 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:45:02.768194 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:45:02.771185 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:45:02.772316 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:45:02.772316 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:02.775006 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:02.775006 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:02.775006 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 12 18:45:03.324531 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 12 18:45:04.526465 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:04.526465 ignition[1000]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 12 18:45:04.529874 ignition[1000]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:45:04.529874 ignition[1000]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:45:04.529874 ignition[1000]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 12 18:45:04.529874 ignition[1000]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:45:04.558855 ignition[1000]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:45:04.558855 ignition[1000]: INFO : files: files passed
Dec 12 18:45:04.558855 ignition[1000]: INFO : Ignition finished successfully
Dec 12 18:45:04.535677 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:45:04.558819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:45:04.562567 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:45:04.573603 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:45:04.574542 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:45:04.581659 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:04.581659 initrd-setup-root-after-ignition[1030]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:04.584135 initrd-setup-root-after-ignition[1034]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:04.586809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:45:04.589039 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:45:04.590870 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:45:04.658213 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:45:04.658356 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:45:04.660112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:45:04.661402 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:45:04.663062 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:45:04.663990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:45:04.703214 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:45:04.705547 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:45:04.724722 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:45:04.726445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:45:04.728230 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:45:04.729760 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:45:04.729878 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:45:04.732009 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:45:04.732997 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:45:04.734460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:45:04.735883 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:45:04.737472 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:45:04.739197 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:45:04.740826 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:45:04.742423 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 12 18:45:04.744077 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:45:04.745715 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:45:04.747282 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:45:04.748669 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:45:04.748789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:45:04.750543 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:45:04.751607 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:45:04.753122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:45:04.753525 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:45:04.754793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:45:04.754956 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:45:04.756779 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:45:04.756899 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:45:04.757925 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:45:04.758074 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:45:04.760730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:45:04.762121 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:45:04.763735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:45:04.766802 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:45:04.768817 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:45:04.769585 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 12 18:45:04.773192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:45:04.773297 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:45:04.782799 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:45:04.782918 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:45:04.797818 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:45:04.823580 ignition[1054]: INFO : Ignition 2.22.0 Dec 12 18:45:04.823580 ignition[1054]: INFO : Stage: umount Dec 12 18:45:04.823580 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:04.823580 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:04.823580 ignition[1054]: INFO : umount: umount passed Dec 12 18:45:04.823580 ignition[1054]: INFO : Ignition finished successfully Dec 12 18:45:04.826187 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:45:04.826332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:45:04.827794 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:45:04.827900 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:45:04.830332 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:45:04.830434 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:45:04.832048 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:45:04.832121 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:45:04.833557 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:45:04.833610 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:45:04.835024 systemd[1]: Stopped target network.target - Network. Dec 12 18:45:04.836431 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 12 18:45:04.836490 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:45:04.838003 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:45:04.839411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:45:04.839744 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:45:04.841012 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:45:04.842475 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:45:04.843971 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:45:04.844029 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:45:04.845423 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:45:04.845476 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:45:04.846816 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:45:04.846878 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:45:04.871286 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:45:04.871359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:45:04.872871 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:45:04.872932 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:45:04.874664 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:45:04.876144 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:45:04.885380 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:45:04.885594 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:45:04.892269 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Dec 12 18:45:04.892670 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:45:04.892856 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:45:04.895726 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:45:04.896672 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:45:04.897930 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:45:04.898001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:45:04.900655 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:45:04.901351 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:45:04.901423 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:45:04.905180 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:45:04.905261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:45:04.908780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:45:04.908865 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:45:04.909814 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:45:04.909873 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:45:04.913760 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:45:04.919550 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:45:04.919660 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:45:04.933761 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:45:04.933904 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Dec 12 18:45:04.935951 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:45:04.936181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:45:04.938262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:45:04.938345 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:45:04.940182 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:45:04.940227 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:45:04.941745 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:45:04.941802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:45:04.943992 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:45:04.944062 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:45:04.945684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:45:04.945768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:45:04.948041 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:45:04.950470 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:45:04.950543 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:45:04.952310 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:45:04.952366 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:45:04.954129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:45:04.954203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:45:04.957133 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Dec 12 18:45:04.957202 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 18:45:04.957257 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:45:04.966489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:45:04.967693 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:45:04.968774 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:45:04.971043 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:45:05.007231 systemd[1]: Switching root. Dec 12 18:45:05.046533 systemd-journald[186]: Journal stopped Dec 12 18:45:06.517407 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Dec 12 18:45:06.517443 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:45:06.517456 kernel: SELinux: policy capability open_perms=1 Dec 12 18:45:06.517466 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:45:06.517476 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:45:06.517489 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:45:06.517500 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:45:06.517510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:45:06.517520 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:45:06.517530 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:45:06.517540 kernel: audit: type=1403 audit(1765565105.222:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:45:06.517551 systemd[1]: Successfully loaded SELinux policy in 81.241ms. Dec 12 18:45:06.517566 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.993ms. 
Dec 12 18:45:06.517578 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:45:06.517589 systemd[1]: Detected virtualization kvm. Dec 12 18:45:06.517600 systemd[1]: Detected architecture x86-64. Dec 12 18:45:06.517614 systemd[1]: Detected first boot. Dec 12 18:45:06.518372 systemd[1]: Initializing machine ID from random generator. Dec 12 18:45:06.518395 zram_generator::config[1102]: No configuration found. Dec 12 18:45:06.518408 kernel: Guest personality initialized and is inactive Dec 12 18:45:06.518424 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:45:06.518439 kernel: Initialized host personality Dec 12 18:45:06.518455 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:45:06.518472 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:45:06.518491 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 18:45:06.518502 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:45:06.518513 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:45:06.518524 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:45:06.518535 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:45:06.518546 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:45:06.518558 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:45:06.518571 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Dec 12 18:45:06.518583 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:45:06.518595 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:45:06.518606 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:45:06.518617 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:45:06.518667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:45:06.518680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:45:06.518691 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:45:06.518707 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:45:06.518722 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:45:06.518734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:45:06.518746 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:45:06.518757 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:45:06.518768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:45:06.518780 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:45:06.518794 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:45:06.518805 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:45:06.518816 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:45:06.518827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Dec 12 18:45:06.518838 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:45:06.518850 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:45:06.518861 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:45:06.518872 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:45:06.518883 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:45:06.518897 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:45:06.518909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:45:06.518921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:45:06.518932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:45:06.518947 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:45:06.518958 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:45:06.518969 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:45:06.518981 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:45:06.518992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:06.519003 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:45:06.519015 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:45:06.519030 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:45:06.519049 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:45:06.519061 systemd[1]: Reached target machines.target - Containers. 
Dec 12 18:45:06.519072 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:45:06.519084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:45:06.519101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:45:06.519118 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:45:06.519134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:45:06.519146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:45:06.519157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:45:06.519173 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:45:06.519184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:45:06.519195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:45:06.519207 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:45:06.519219 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:45:06.519230 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:45:06.519241 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:45:06.519253 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:45:06.519267 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:45:06.519279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 12 18:45:06.519290 kernel: loop: module loaded Dec 12 18:45:06.519301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:45:06.519318 kernel: fuse: init (API version 7.41) Dec 12 18:45:06.519335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:45:06.519353 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:45:06.519365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:45:06.519380 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:45:06.519391 systemd[1]: Stopped verity-setup.service. Dec 12 18:45:06.519403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:06.519414 kernel: ACPI: bus type drm_connector registered Dec 12 18:45:06.519425 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:45:06.519436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:45:06.519448 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:45:06.519460 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:45:06.519471 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:45:06.519485 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:45:06.519496 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:45:06.519508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:45:06.519549 systemd-journald[1182]: Collecting audit messages is disabled. Dec 12 18:45:06.519574 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 12 18:45:06.519586 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:45:06.519597 systemd-journald[1182]: Journal started Dec 12 18:45:06.519617 systemd-journald[1182]: Runtime Journal (/run/log/journal/ddfaa01670f24f72aad594e67236d70d) is 8M, max 78.2M, 70.2M free. Dec 12 18:45:06.522401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:45:05.997326 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:45:06.024506 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 18:45:06.025100 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:45:06.528728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:45:06.531714 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:45:06.533598 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:45:06.534188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:45:06.535491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:45:06.535872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:45:06.537206 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:45:06.537499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:45:06.538955 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:45:06.539286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:45:06.540656 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:45:06.542185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:45:06.543460 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 12 18:45:06.544841 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:45:06.562444 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:45:06.565718 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:45:06.569772 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:45:06.571700 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:45:06.571732 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:45:06.574546 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:45:06.589779 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:45:06.591464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:45:06.594334 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:45:06.599747 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:45:06.600537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:45:06.601719 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:45:06.602522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:45:06.605832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:45:06.645758 systemd-journald[1182]: Time spent on flushing to /var/log/journal/ddfaa01670f24f72aad594e67236d70d is 159.214ms for 987 entries. 
Dec 12 18:45:06.645758 systemd-journald[1182]: System Journal (/var/log/journal/ddfaa01670f24f72aad594e67236d70d) is 8M, max 195.6M, 187.6M free. Dec 12 18:45:06.872688 systemd-journald[1182]: Received client request to flush runtime journal. Dec 12 18:45:06.640003 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:45:06.648043 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:45:06.657456 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:45:06.658495 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:45:06.676538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:45:06.684509 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:45:06.689133 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:45:06.918608 kernel: loop0: detected capacity change from 0 to 128560 Dec 12 18:45:07.078472 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:45:07.097287 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:45:07.100453 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:45:07.124294 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:45:07.143318 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:45:07.145806 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:45:07.162718 kernel: loop1: detected capacity change from 0 to 229808 Dec 12 18:45:07.180667 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:45:07.221256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 12 18:45:07.252409 kernel: loop2: detected capacity change from 0 to 8 Dec 12 18:45:07.289462 kernel: loop3: detected capacity change from 0 to 110984 Dec 12 18:45:07.294901 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 12 18:45:07.295549 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 12 18:45:07.310851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:45:07.351711 kernel: loop4: detected capacity change from 0 to 128560 Dec 12 18:45:07.384694 kernel: loop5: detected capacity change from 0 to 229808 Dec 12 18:45:07.565837 kernel: loop6: detected capacity change from 0 to 8 Dec 12 18:45:07.572680 kernel: loop7: detected capacity change from 0 to 110984 Dec 12 18:45:07.617804 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Dec 12 18:45:07.621670 (sd-merge)[1247]: Merged extensions into '/usr'. Dec 12 18:45:07.627330 systemd[1]: Reload requested from client PID 1223 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:45:07.627703 systemd[1]: Reloading... Dec 12 18:45:07.844154 zram_generator::config[1269]: No configuration found. Dec 12 18:45:08.103992 ldconfig[1218]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:45:08.147455 systemd[1]: Reloading finished in 519 ms. Dec 12 18:45:08.177854 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:45:08.179103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:45:08.180228 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:45:08.190088 systemd[1]: Starting ensure-sysext.service... Dec 12 18:45:08.194750 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 12 18:45:08.197945 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:45:08.234178 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:45:08.235497 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:45:08.235943 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:45:08.236348 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 18:45:08.237412 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:45:08.237790 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Dec 12 18:45:08.238098 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Dec 12 18:45:08.244861 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:45:08.245161 systemd-tmpfiles[1318]: Skipping /boot Dec 12 18:45:08.261498 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:45:08.261592 systemd-tmpfiles[1318]: Skipping /boot Dec 12 18:45:08.284755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:45:08.292856 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:45:08.296905 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:45:08.307285 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:45:08.313521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:45:08.316693 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Dec 12 18:45:08.319470 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:45:08.319485 systemd[1]: Reloading...
Dec 12 18:45:08.345511 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Dec 12 18:45:08.434667 zram_generator::config[1363]: No configuration found.
Dec 12 18:45:08.470800 augenrules[1383]: No rules
Dec 12 18:45:08.780319 systemd[1]: Reloading finished in 460 ms.
Dec 12 18:45:08.784661 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:45:08.792716 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:45:08.795177 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:45:08.795452 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:45:08.797313 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:45:08.805654 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:45:08.816362 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:45:08.830676 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:45:08.834760 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:45:08.846858 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:45:08.857650 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:45:08.862445 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:45:08.865070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:08.865347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:08.868693 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:45:08.872034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:45:08.876103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:45:08.877204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:08.877312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:08.880478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:45:08.884888 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:45:08.894026 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:45:08.895000 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:45:08.895097 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:08.901443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:08.905159 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:45:08.907172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:08.911117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:45:08.913131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:08.913233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:08.913352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:45:08.913424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:08.916391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:45:08.953140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:45:08.956513 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:45:08.956824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:45:08.960850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:45:08.961445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:45:08.968453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:45:08.968663 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:45:08.975847 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:45:08.980323 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:45:08.981662 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:45:08.982196 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:45:08.984129 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:45:09.007966 augenrules[1480]: /sbin/augenrules: No change
Dec 12 18:45:09.024174 augenrules[1507]: No rules
Dec 12 18:45:09.026939 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:45:09.027485 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:45:09.039154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:45:09.075653 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:45:09.184206 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:45:09.336242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:09.376545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:45:09.392789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:45:09.431218 systemd-networkd[1477]: lo: Link UP
Dec 12 18:45:09.431230 systemd-networkd[1477]: lo: Gained carrier
Dec 12 18:45:09.436245 systemd-networkd[1477]: Enumeration completed
Dec 12 18:45:09.436347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:45:09.440352 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:45:09.440363 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:45:09.441404 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:45:09.441497 systemd-networkd[1477]: eth0: Link UP
Dec 12 18:45:09.441731 systemd-networkd[1477]: eth0: Gained carrier
Dec 12 18:45:09.441746 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:45:09.444012 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:45:09.447589 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:45:09.448524 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:45:09.458290 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:45:09.478527 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:45:09.481318 systemd-resolved[1325]: Positive Trust Anchors:
Dec 12 18:45:09.481668 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:45:09.481769 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:45:09.486859 systemd-resolved[1325]: Defaulting to hostname 'linux'.
Dec 12 18:45:09.489477 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:45:09.490448 systemd[1]: Reached target network.target - Network.
Dec 12 18:45:09.491319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:45:09.492137 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:45:09.492999 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:45:09.493825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:45:09.494593 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:45:09.495563 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:45:09.496428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:45:09.497207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:45:09.497978 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:45:09.498014 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:45:09.498705 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:45:09.500454 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:45:09.503081 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:45:09.506110 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:45:09.507101 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:45:09.507873 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:45:09.510840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:45:09.512483 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:45:09.513924 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:45:09.515389 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:45:09.516109 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:45:09.516865 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:45:09.516906 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:45:09.518183 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:45:09.521770 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:45:09.531325 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:45:09.534879 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:45:09.540500 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:45:09.552399 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:45:09.553723 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:45:09.555244 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:45:09.558545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:45:09.563966 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:45:09.567871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:45:09.583679 jq[1543]: false
Dec 12 18:45:09.584849 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:45:09.588061 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:45:09.590961 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:45:09.599325 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:45:09.606302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:45:09.625134 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:45:09.627322 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:45:09.627781 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:45:09.629421 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:45:09.629700 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:45:09.904770 oslogin_cache_refresh[1545]: Refreshing passwd entry cache
Dec 12 18:45:09.925342 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:45:09.926134 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Refreshing passwd entry cache
Dec 12 18:45:09.926134 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Failure getting users, quitting
Dec 12 18:45:09.926134 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:45:09.926134 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Refreshing group entry cache
Dec 12 18:45:09.926134 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Failure getting groups, quitting
Dec 12 18:45:09.926134 google_oslogin_nss_cache[1545]: oslogin_cache_refresh[1545]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:45:09.911199 oslogin_cache_refresh[1545]: Failure getting users, quitting
Dec 12 18:45:09.911226 oslogin_cache_refresh[1545]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:45:09.911272 oslogin_cache_refresh[1545]: Refreshing group entry cache
Dec 12 18:45:09.921181 oslogin_cache_refresh[1545]: Failure getting groups, quitting
Dec 12 18:45:09.921193 oslogin_cache_refresh[1545]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:45:09.950476 update_engine[1552]: I20251212 18:45:09.946435 1552 main.cc:92] Flatcar Update Engine starting
Dec 12 18:45:09.955196 extend-filesystems[1544]: Found /dev/sda6
Dec 12 18:45:09.958709 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:45:09.996967 extend-filesystems[1544]: Found /dev/sda9
Dec 12 18:45:09.999230 dbus-daemon[1541]: [system] SELinux support is enabled
Dec 12 18:45:10.001983 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:45:10.004472 coreos-metadata[1540]: Dec 12 18:45:10.004 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:45:10.005556 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:45:10.005592 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:45:10.006410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:45:10.006438 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:45:10.010456 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:45:10.011508 extend-filesystems[1544]: Checking size of /dev/sda9
Dec 12 18:45:10.012676 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:45:10.015883 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:45:10.017642 update_engine[1552]: I20251212 18:45:10.016850 1552 update_check_scheduler.cc:74] Next update check in 11m40s
Dec 12 18:45:10.018997 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:45:10.030200 jq[1553]: true
Dec 12 18:45:10.039160 extend-filesystems[1544]: Resized partition /dev/sda9
Dec 12 18:45:10.064070 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Dec 12 18:45:10.064492 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:45:10.044132 systemd-logind[1549]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:45:10.044158 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:45:10.061040 systemd-logind[1549]: New seat seat0.
Dec 12 18:45:10.063845 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:45:10.072116 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:45:10.087981 jq[1586]: true
Dec 12 18:45:10.415450 systemd-networkd[1477]: eth0: DHCPv4 address 172.237.139.56/24, gateway 172.237.139.1 acquired from 23.210.200.68
Dec 12 18:45:10.419463 dbus-daemon[1541]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1477 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 12 18:45:10.426131 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
Dec 12 18:45:10.429976 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 12 18:45:10.509771 systemd-networkd[1477]: eth0: Gained IPv6LL
Dec 12 18:45:10.510727 bash[1607]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:45:10.512577 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:45:10.517900 systemd[1]: Starting sshkeys.service...
Dec 12 18:45:10.519746 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 18:45:10.522186 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 18:45:11.543472 systemd-resolved[1325]: Clock change detected. Flushing caches.
Dec 12 18:45:11.543858 systemd-timesyncd[1490]: Contacted time server 69.89.207.99:123 (0.flatcar.pool.ntp.org).
Dec 12 18:45:11.543928 systemd-timesyncd[1490]: Initial clock synchronization to Fri 2025-12-12 18:45:11.543418 UTC.
Dec 12 18:45:11.546948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:11.549907 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 18:45:11.608921 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:45:11.614303 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:45:11.758027 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 12 18:45:11.777769 dbus-daemon[1541]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 12 18:45:11.795114 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 18:45:11.801884 dbus-daemon[1541]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1602 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 12 18:45:11.837181 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 12 18:45:11.887390 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 18:45:11.940484 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Dec 12 18:45:11.955882 extend-filesystems[1584]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 12 18:45:11.955882 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 10
Dec 12 18:45:11.955882 extend-filesystems[1584]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Dec 12 18:45:11.964894 extend-filesystems[1544]: Resized filesystem in /dev/sda9
Dec 12 18:45:11.968289 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:45:11.968576 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:45:11.974770 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 18:45:11.994625 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 18:45:12.019488 coreos-metadata[1614]: Dec 12 18:45:12.018 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:45:12.033805 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 18:45:12.043379 systemd[1]: Started sshd@0-172.237.139.56:22-139.178.68.195:40272.service - OpenSSH per-connection server daemon (139.178.68.195:40272).
Dec 12 18:45:12.057020 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 18:45:12.070729 coreos-metadata[1540]: Dec 12 18:45:12.070 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:45:12.083774 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 18:45:12.084517 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 18:45:12.092329 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 18:45:12.167389 coreos-metadata[1614]: Dec 12 18:45:12.166 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Dec 12 18:45:12.194869 coreos-metadata[1540]: Dec 12 18:45:12.194 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Dec 12 18:45:12.225460 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 18:45:12.238296 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 18:45:12.245444 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 12 18:45:12.246623 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 18:45:12.282728 polkitd[1628]: Started polkitd version 126
Dec 12 18:45:12.403957 coreos-metadata[1614]: Dec 12 18:45:12.403 INFO Fetch successful
Dec 12 18:45:12.414320 polkitd[1628]: Loading rules from directory /etc/polkit-1/rules.d
Dec 12 18:45:12.417158 polkitd[1628]: Loading rules from directory /run/polkit-1/rules.d
Dec 12 18:45:12.418994 polkitd[1628]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:45:12.419231 polkitd[1628]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 12 18:45:12.419256 polkitd[1628]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:45:12.419303 polkitd[1628]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 12 18:45:12.428080 polkitd[1628]: Finished loading, compiling and executing 2 rules
Dec 12 18:45:12.428422 systemd[1]: Started polkit.service - Authorization Manager.
Dec 12 18:45:12.432146 dbus-daemon[1541]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 12 18:45:12.434116 polkitd[1628]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 12 18:45:12.450376 coreos-metadata[1540]: Dec 12 18:45:12.450 INFO Fetch successful
Dec 12 18:45:12.450593 coreos-metadata[1540]: Dec 12 18:45:12.450 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Dec 12 18:45:12.450670 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:45:12.454725 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 12 18:45:12.462327 systemd[1]: Finished sshkeys.service.
Dec 12 18:45:12.469061 systemd-resolved[1325]: System hostname changed to '172-237-139-56'.
Dec 12 18:45:12.469226 systemd-hostnamed[1602]: Hostname set to <172-237-139-56> (transient)
Dec 12 18:45:12.473548 containerd[1574]: time="2025-12-12T18:45:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:45:12.474302 containerd[1574]: time="2025-12-12T18:45:12.474268756Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:45:12.504342 containerd[1574]: time="2025-12-12T18:45:12.504297926Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs"
Dec 12 18:45:12.504342 containerd[1574]: time="2025-12-12T18:45:12.504334466Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:45:12.504342 containerd[1574]: time="2025-12-12T18:45:12.504352936Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:45:12.504795 containerd[1574]: time="2025-12-12T18:45:12.504772876Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:45:12.504825 containerd[1574]: time="2025-12-12T18:45:12.504796266Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:45:12.504888 containerd[1574]: time="2025-12-12T18:45:12.504828776Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:45:12.504974 containerd[1574]: time="2025-12-12T18:45:12.504950556Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:45:12.504974 containerd[1574]: time="2025-12-12T18:45:12.504971046Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:45:12.505488 containerd[1574]: time="2025-12-12T18:45:12.505454626Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:45:12.505521 containerd[1574]: time="2025-12-12T18:45:12.505486106Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:45:12.505521 containerd[1574]: time="2025-12-12T18:45:12.505499856Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:45:12.505521 containerd[1574]: time="2025-12-12T18:45:12.505507826Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:45:12.505670 containerd[1574]: time="2025-12-12T18:45:12.505648146Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:45:12.506116 containerd[1574]: time="2025-12-12T18:45:12.506077296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:45:12.506168 containerd[1574]: time="2025-12-12T18:45:12.506123396Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:45:12.506168 containerd[1574]: time="2025-12-12T18:45:12.506157316Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:45:12.506494 containerd[1574]: time="2025-12-12T18:45:12.506408706Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:45:12.506665 containerd[1574]: time="2025-12-12T18:45:12.506647906Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:45:12.506750 containerd[1574]: time="2025-12-12T18:45:12.506727846Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509030086Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509103706Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509120066Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509145796Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509157876Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509167096Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509178666Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509189316Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509208836Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509239986Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509249856Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509260706Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509396346Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:45:12.513968 containerd[1574]: time="2025-12-12T18:45:12.509421836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509446686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509458566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509468956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509478536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509488356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509497246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509507266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509516546Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509525786Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509590826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509604056Z" level=info msg="Start snapshots syncer"
Dec 12 18:45:12.514631 containerd[1574]: time="2025-12-12T18:45:12.509639726Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 18:45:12.514871 containerd[1574]: time="2025-12-12T18:45:12.509905436Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 18:45:12.514871 containerd[1574]: time="2025-12-12T18:45:12.509961686Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510008936Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510154246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510391896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510404196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510413536Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510425776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510435126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510444526Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510468356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510478066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510487276Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510516196Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510529036Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:45:12.515194 containerd[1574]: time="2025-12-12T18:45:12.510537166Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510545696Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510553026Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510563396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510579046Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510601516Z" level=info msg="runtime interface created" Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510607736Z" level=info msg="created NRI interface" Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510615236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510624846Z" level=info msg="Connect containerd service" Dec 12 18:45:12.515434 containerd[1574]: time="2025-12-12T18:45:12.510644716Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:45:12.515434 
containerd[1574]: time="2025-12-12T18:45:12.511720106Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:45:12.712060 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:12.800973 coreos-metadata[1540]: Dec 12 18:45:12.795 INFO Fetch successful Dec 12 18:45:12.801066 sshd[1650]: Accepted publickey for core from 139.178.68.195 port 40272 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:12.686150 systemd[1]: Started sshd@1-172.237.139.56:22-45.121.214.115:40610.service - OpenSSH per-connection server daemon (45.121.214.115:40610). Dec 12 18:45:12.863385 systemd[1]: Started sshd@2-172.237.139.56:22-176.15.7.153:7925.service - OpenSSH per-connection server daemon (176.15.7.153:7925). Dec 12 18:45:12.901579 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:45:12.905311 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:45:12.918869 sshd[1684]: Connection closed by 45.121.214.115 port 40610 [preauth] Dec 12 18:45:12.933463 systemd[1]: sshd@1-172.237.139.56:22-45.121.214.115:40610.service: Deactivated successfully. Dec 12 18:45:12.965800 systemd-logind[1549]: New session 1 of user core. Dec 12 18:45:13.013987 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:45:13.026277 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:45:13.045587 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:45:13.059110 systemd-logind[1549]: New session c1 of user core. 
Dec 12 18:45:13.071891 containerd[1574]: time="2025-12-12T18:45:13.071049806Z" level=info msg="Start subscribing containerd event"
Dec 12 18:45:13.071891 containerd[1574]: time="2025-12-12T18:45:13.071136416Z" level=info msg="Start recovering state"
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072120306Z" level=info msg="Start event monitor"
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072143376Z" level=info msg="Start cni network conf syncer for default"
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072157326Z" level=info msg="Start streaming server"
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072207236Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072217006Z" level=info msg="runtime interface starting up..."
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072224066Z" level=info msg="starting plugins..."
Dec 12 18:45:13.072452 containerd[1574]: time="2025-12-12T18:45:13.072246606Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 18:45:13.085342 containerd[1574]: time="2025-12-12T18:45:13.083589976Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 18:45:13.085342 containerd[1574]: time="2025-12-12T18:45:13.083668316Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 18:45:13.085342 containerd[1574]: time="2025-12-12T18:45:13.083762536Z" level=info msg="containerd successfully booted in 0.610780s"
Dec 12 18:45:13.088929 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 18:45:13.176055 systemd[1]: Started sshd@3-172.237.139.56:22-107.151.216.141:51842.service - OpenSSH per-connection server daemon (107.151.216.141:51842).
Dec 12 18:45:13.282658 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 12 18:45:13.284479 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:45:13.291435 sshd[1719]: ssh_dispatch_run_fatal: Connection from UNKNOWN port 65535: Broken pipe [preauth]
Dec 12 18:45:13.296943 systemd[1]: sshd@3-172.237.139.56:22-107.151.216.141:51842.service: Deactivated successfully.
Dec 12 18:45:13.309762 systemd[1705]: Queued start job for default target default.target.
Dec 12 18:45:13.316741 systemd[1705]: Created slice app.slice - User Application Slice.
Dec 12 18:45:13.317201 systemd[1705]: Reached target paths.target - Paths.
Dec 12 18:45:13.317345 systemd[1705]: Reached target timers.target - Timers.
Dec 12 18:45:13.319945 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 18:45:13.333136 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 18:45:13.334064 systemd[1705]: Reached target sockets.target - Sockets.
Dec 12 18:45:13.334399 systemd[1705]: Reached target basic.target - Basic System.
Dec 12 18:45:13.334519 systemd[1705]: Reached target default.target - Main User Target.
Dec 12 18:45:13.334644 systemd[1705]: Startup finished in 261ms.
Dec 12 18:45:13.339765 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 18:45:13.351228 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 18:45:13.679539 systemd[1]: Started sshd@4-172.237.139.56:22-89.40.242.20:26649.service - OpenSSH per-connection server daemon (89.40.242.20:26649).
Dec 12 18:45:13.742170 systemd[1]: Started sshd@5-172.237.139.56:22-139.178.68.195:40284.service - OpenSSH per-connection server daemon (139.178.68.195:40284).
Dec 12 18:45:14.110930 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 40284 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:14.113384 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:14.120956 systemd-logind[1549]: New session 2 of user core.
Dec 12 18:45:14.278185 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 18:45:14.389464 systemd[1]: Started sshd@6-172.237.139.56:22-5.210.76.34:5477.service - OpenSSH per-connection server daemon (5.210.76.34:5477).
Dec 12 18:45:14.489887 sshd[1743]: Connection closed by 139.178.68.195 port 40284
Dec 12 18:45:14.490768 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:14.497932 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit.
Dec 12 18:45:14.499692 systemd[1]: sshd@5-172.237.139.56:22-139.178.68.195:40284.service: Deactivated successfully.
Dec 12 18:45:14.503646 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 18:45:14.507613 systemd-logind[1549]: Removed session 2.
Dec 12 18:45:14.561268 systemd[1]: Started sshd@7-172.237.139.56:22-139.178.68.195:40294.service - OpenSSH per-connection server daemon (139.178.68.195:40294).
Dec 12 18:45:14.585903 sshd[1736]: Connection reset by 89.40.242.20 port 26649 [preauth]
Dec 12 18:45:14.586406 systemd[1]: sshd@4-172.237.139.56:22-89.40.242.20:26649.service: Deactivated successfully.
Dec 12 18:45:14.937446 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 40294 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:14.938517 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:14.947299 systemd-logind[1549]: New session 3 of user core.
Dec 12 18:45:14.954976 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 18:45:15.000463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:15.002024 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 18:45:15.059294 systemd[1]: Startup finished in 4.658s (kernel) + 9.511s (initrd) + 8.898s (userspace) = 23.068s.
Dec 12 18:45:15.065507 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:45:15.192039 sshd[1759]: Connection closed by 139.178.68.195 port 40294
Dec 12 18:45:15.192564 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:15.198991 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit.
Dec 12 18:45:15.200063 systemd[1]: sshd@7-172.237.139.56:22-139.178.68.195:40294.service: Deactivated successfully.
Dec 12 18:45:15.204030 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 18:45:15.207673 systemd-logind[1549]: Removed session 3.
Dec 12 18:45:15.369639 sshd[1745]: Connection closed by 5.210.76.34 port 5477 [preauth]
Dec 12 18:45:15.371925 systemd[1]: sshd@6-172.237.139.56:22-5.210.76.34:5477.service: Deactivated successfully.
Dec 12 18:45:15.689957 systemd[1]: Started sshd@8-172.237.139.56:22-5.217.105.177:48972.service - OpenSSH per-connection server daemon (5.217.105.177:48972).
Dec 12 18:45:15.929253 sshd[1780]: kex_exchange_identification: read: Connection reset by peer
Dec 12 18:45:15.929253 sshd[1780]: Connection reset by 5.217.105.177 port 48972
Dec 12 18:45:15.929401 systemd[1]: sshd@8-172.237.139.56:22-5.217.105.177:48972.service: Deactivated successfully.
Dec 12 18:45:16.104986 kubelet[1764]: E1212 18:45:16.104372 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:45:16.124880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:45:16.125136 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:45:16.125941 systemd[1]: kubelet.service: Consumed 2.655s CPU time, 270.6M memory peak.
Dec 12 18:45:16.397983 systemd[1]: Started sshd@9-172.237.139.56:22-5.217.75.71:53660.service - OpenSSH per-connection server daemon (5.217.75.71:53660).
Dec 12 18:45:17.778441 sshd[1786]: Connection closed by 5.217.75.71 port 53660 [preauth]
Dec 12 18:45:17.780743 systemd[1]: sshd@9-172.237.139.56:22-5.217.75.71:53660.service: Deactivated successfully.
Dec 12 18:45:18.869896 systemd[1]: Started sshd@10-172.237.139.56:22-85.185.255.13:39920.service - OpenSSH per-connection server daemon (85.185.255.13:39920).
Dec 12 18:45:19.676719 sshd[1792]: Connection closed by 85.185.255.13 port 39920 [preauth]
Dec 12 18:45:19.679130 systemd[1]: sshd@10-172.237.139.56:22-85.185.255.13:39920.service: Deactivated successfully.
Dec 12 18:45:20.884551 systemd[1]: Started sshd@11-172.237.139.56:22-95.111.209.169:43090.service - OpenSSH per-connection server daemon (95.111.209.169:43090).
Dec 12 18:45:20.981614 systemd[1]: Started sshd@12-172.237.139.56:22-109.177.23.90:37312.service - OpenSSH per-connection server daemon (109.177.23.90:37312).
Dec 12 18:45:21.450945 sshd[1802]: Connection closed by 109.177.23.90 port 37312 [preauth]
Dec 12 18:45:21.453077 systemd[1]: sshd@12-172.237.139.56:22-109.177.23.90:37312.service: Deactivated successfully.
Dec 12 18:45:22.571178 sshd[1798]: Connection closed by 95.111.209.169 port 43090 [preauth]
Dec 12 18:45:22.574049 systemd[1]: sshd@11-172.237.139.56:22-95.111.209.169:43090.service: Deactivated successfully.
Dec 12 18:45:22.803062 systemd[1]: Started sshd@13-172.237.139.56:22-151.238.70.85:33677.service - OpenSSH per-connection server daemon (151.238.70.85:33677).
Dec 12 18:45:23.663652 sshd[1810]: Connection closed by 151.238.70.85 port 33677 [preauth]
Dec 12 18:45:23.665941 systemd[1]: sshd@13-172.237.139.56:22-151.238.70.85:33677.service: Deactivated successfully.
Dec 12 18:45:24.039511 systemd[1]: Started sshd@14-172.237.139.56:22-45.80.228.170:36504.service - OpenSSH per-connection server daemon (45.80.228.170:36504).
Dec 12 18:45:24.965346 sshd[1816]: Connection reset by 45.80.228.170 port 36504 [preauth]
Dec 12 18:45:24.967645 systemd[1]: sshd@14-172.237.139.56:22-45.80.228.170:36504.service: Deactivated successfully.
Dec 12 18:45:25.253858 systemd[1]: Started sshd@15-172.237.139.56:22-139.178.68.195:57314.service - OpenSSH per-connection server daemon (139.178.68.195:57314).
Dec 12 18:45:25.286069 systemd[1]: Started sshd@16-172.237.139.56:22-65.18.127.111:5031.service - OpenSSH per-connection server daemon (65.18.127.111:5031).
Dec 12 18:45:25.596547 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 57314 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:25.598427 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:25.606574 systemd-logind[1549]: New session 4 of user core.
Dec 12 18:45:25.609979 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 18:45:25.844259 sshd[1829]: Connection closed by 139.178.68.195 port 57314
Dec 12 18:45:25.845038 sshd-session[1822]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:25.849625 systemd[1]: sshd@15-172.237.139.56:22-139.178.68.195:57314.service: Deactivated successfully.
Dec 12 18:45:25.851979 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 18:45:25.853047 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit.
Dec 12 18:45:25.854720 systemd-logind[1549]: Removed session 4.
Dec 12 18:45:25.911790 systemd[1]: Started sshd@17-172.237.139.56:22-139.178.68.195:57316.service - OpenSSH per-connection server daemon (139.178.68.195:57316).
Dec 12 18:45:26.116459 systemd[1]: Started sshd@18-172.237.139.56:22-5.214.225.134:35156.service - OpenSSH per-connection server daemon (5.214.225.134:35156).
Dec 12 18:45:26.133829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:45:26.138983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:26.251566 sshd[1826]: Connection closed by 65.18.127.111 port 5031 [preauth]
Dec 12 18:45:26.253823 systemd[1]: sshd@16-172.237.139.56:22-65.18.127.111:5031.service: Deactivated successfully.
Dec 12 18:45:26.270716 sshd[1835]: Accepted publickey for core from 139.178.68.195 port 57316 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:26.272850 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:26.284143 systemd-logind[1549]: New session 5 of user core.
Dec 12 18:45:26.291981 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 18:45:26.425025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:26.442495 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:45:26.537601 sshd[1846]: Connection closed by 139.178.68.195 port 57316
Dec 12 18:45:26.528879 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit.
Dec 12 18:45:26.525113 sshd-session[1835]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:26.529806 systemd[1]: sshd@17-172.237.139.56:22-139.178.68.195:57316.service: Deactivated successfully.
Dec 12 18:45:26.533315 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 18:45:26.536510 systemd-logind[1549]: Removed session 5.
Dec 12 18:45:26.699996 kubelet[1852]: E1212 18:45:26.699080 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:45:26.708201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:45:26.708687 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:45:26.710113 systemd[1]: kubelet.service: Consumed 510ms CPU time, 110.4M memory peak.
Dec 12 18:45:26.755913 systemd[1]: Started sshd@19-172.237.139.56:22-139.178.68.195:57326.service - OpenSSH per-connection server daemon (139.178.68.195:57326).
Dec 12 18:45:27.115336 sshd[1864]: Accepted publickey for core from 139.178.68.195 port 57326 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:27.116690 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:27.123003 systemd-logind[1549]: New session 6 of user core.
Dec 12 18:45:27.132972 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:45:27.199059 systemd[1]: Started sshd@20-172.237.139.56:22-5.208.146.88:26120.service - OpenSSH per-connection server daemon (5.208.146.88:26120).
Dec 12 18:45:27.365286 sshd[1867]: Connection closed by 139.178.68.195 port 57326
Dec 12 18:45:27.366118 sshd-session[1864]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:27.370705 systemd[1]: sshd@19-172.237.139.56:22-139.178.68.195:57326.service: Deactivated successfully.
Dec 12 18:45:27.372786 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:45:27.374023 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:45:27.375808 systemd-logind[1549]: Removed session 6.
Dec 12 18:45:27.439981 systemd[1]: Started sshd@21-172.237.139.56:22-139.178.68.195:57340.service - OpenSSH per-connection server daemon (139.178.68.195:57340).
Dec 12 18:45:27.795943 sshd[1877]: Accepted publickey for core from 139.178.68.195 port 57340 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:27.797790 sshd-session[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:27.803898 systemd-logind[1549]: New session 7 of user core.
Dec 12 18:45:27.811002 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:45:28.010687 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 18:45:28.011077 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:28.033091 sudo[1881]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:28.062021 sshd[1869]: Connection closed by 5.208.146.88 port 26120 [preauth]
Dec 12 18:45:28.063914 systemd[1]: sshd@20-172.237.139.56:22-5.208.146.88:26120.service: Deactivated successfully.
Dec 12 18:45:28.085683 sshd[1880]: Connection closed by 139.178.68.195 port 57340
Dec 12 18:45:28.086140 sshd-session[1877]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:28.090775 systemd[1]: sshd@21-172.237.139.56:22-139.178.68.195:57340.service: Deactivated successfully.
Dec 12 18:45:28.092729 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:45:28.094048 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:45:28.095957 systemd-logind[1549]: Removed session 7.
Dec 12 18:45:28.145487 systemd[1]: Started sshd@22-172.237.139.56:22-139.178.68.195:57342.service - OpenSSH per-connection server daemon (139.178.68.195:57342).
Dec 12 18:45:28.488090 sshd[1889]: Accepted publickey for core from 139.178.68.195 port 57342 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:28.490651 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:28.503890 systemd-logind[1549]: New session 8 of user core.
Dec 12 18:45:28.509968 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 18:45:28.685046 sudo[1894]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:45:28.685594 sudo[1894]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:28.691569 sudo[1894]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:28.698194 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:45:28.698721 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:28.710541 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:45:28.756392 augenrules[1916]: No rules
Dec 12 18:45:28.757628 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:45:28.757975 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:45:28.759668 sudo[1893]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:28.810095 sshd[1892]: Connection closed by 139.178.68.195 port 57342
Dec 12 18:45:28.810745 sshd-session[1889]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:28.815942 systemd[1]: sshd@22-172.237.139.56:22-139.178.68.195:57342.service: Deactivated successfully.
Dec 12 18:45:28.818105 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 18:45:28.819173 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit.
Dec 12 18:45:28.821268 systemd-logind[1549]: Removed session 8.
Dec 12 18:45:28.876991 systemd[1]: Started sshd@23-172.237.139.56:22-139.178.68.195:57350.service - OpenSSH per-connection server daemon (139.178.68.195:57350).
Dec 12 18:45:29.224513 sshd[1925]: Accepted publickey for core from 139.178.68.195 port 57350 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:29.226647 sshd-session[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:29.233788 systemd-logind[1549]: New session 9 of user core.
Dec 12 18:45:29.239962 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 18:45:29.428564 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:45:29.428955 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:30.541855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:30.542009 systemd[1]: kubelet.service: Consumed 510ms CPU time, 110.4M memory peak.
Dec 12 18:45:30.544792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:30.580761 systemd[1]: Reload requested from client PID 1964 ('systemctl') (unit session-9.scope)...
Dec 12 18:45:30.580943 systemd[1]: Reloading...
Dec 12 18:45:30.801930 zram_generator::config[2011]: No configuration found.
Dec 12 18:45:31.058292 systemd[1]: Reloading finished in 476 ms.
Dec 12 18:45:31.133377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:31.141137 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:45:31.142803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:31.144575 systemd[1]: Started sshd@24-172.237.139.56:22-5.217.91.112:54401.service - OpenSSH per-connection server daemon (5.217.91.112:54401).
Dec 12 18:45:31.146647 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 18:45:31.147034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:31.147198 systemd[1]: kubelet.service: Consumed 358ms CPU time, 98.4M memory peak.
Dec 12 18:45:31.152118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:31.336768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:31.348772 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:45:31.411093 kubelet[2072]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:45:31.411093 kubelet[2072]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:45:31.411093 kubelet[2072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:45:31.411542 kubelet[2072]: I1212 18:45:31.411193 2072 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:45:31.624569 kubelet[2072]: I1212 18:45:31.624442 2072 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:45:31.626815 kubelet[2072]: I1212 18:45:31.624692 2072 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:45:31.626815 kubelet[2072]: I1212 18:45:31.624993 2072 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:45:31.662243 kubelet[2072]: I1212 18:45:31.662184 2072 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:45:31.679953 kubelet[2072]: I1212 18:45:31.679914 2072 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:45:31.687576 kubelet[2072]: I1212 18:45:31.687536 2072 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:45:31.688200 kubelet[2072]: I1212 18:45:31.688149 2072 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:45:31.688505 kubelet[2072]: I1212 18:45:31.688282 2072 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"192.168.177.56","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:45:31.688944 kubelet[2072]: I1212 18:45:31.688924 2072 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
18:45:31.689052 kubelet[2072]: I1212 18:45:31.689038 2072 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:45:31.690505 kubelet[2072]: I1212 18:45:31.690480 2072 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:45:31.703517 kubelet[2072]: I1212 18:45:31.703466 2072 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:45:31.703729 kubelet[2072]: I1212 18:45:31.703708 2072 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:45:31.703979 kubelet[2072]: I1212 18:45:31.703956 2072 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:45:31.704074 kubelet[2072]: I1212 18:45:31.704063 2072 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:45:31.704190 kubelet[2072]: E1212 18:45:31.704164 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:31.704190 kubelet[2072]: E1212 18:45:31.704067 2072 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:31.712258 kubelet[2072]: I1212 18:45:31.711398 2072 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:45:31.712258 kubelet[2072]: I1212 18:45:31.712220 2072 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:45:31.713443 kubelet[2072]: W1212 18:45:31.713420 2072 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 18:45:31.718661 kubelet[2072]: I1212 18:45:31.718616 2072 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:45:31.718727 kubelet[2072]: I1212 18:45:31.718717 2072 server.go:1289] "Started kubelet" Dec 12 18:45:31.721127 kubelet[2072]: I1212 18:45:31.721086 2072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:45:31.738008 kubelet[2072]: I1212 18:45:31.736264 2072 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:45:31.738008 kubelet[2072]: I1212 18:45:31.737810 2072 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:45:31.744013 kubelet[2072]: I1212 18:45:31.743971 2072 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:45:31.744583 kubelet[2072]: E1212 18:45:31.744553 2072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.177.56\" not found" Dec 12 18:45:31.749094 kubelet[2072]: I1212 18:45:31.745599 2072 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:45:31.749242 kubelet[2072]: I1212 18:45:31.745686 2072 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:45:31.749375 kubelet[2072]: I1212 18:45:31.746538 2072 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:45:31.749963 kubelet[2072]: I1212 18:45:31.747359 2072 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:45:31.849281 kubelet[2072]: E1212 18:45:31.849239 2072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.177.56\" not found" Dec 12 18:45:31.850059 kubelet[2072]: I1212 18:45:31.850043 2072 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:45:31.851220 kubelet[2072]: E1212 18:45:31.851190 2072 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"192.168.177.56\" not found" node="192.168.177.56" Dec 12 18:45:31.861600 kubelet[2072]: I1212 18:45:31.861542 2072 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:45:31.862572 kubelet[2072]: I1212 18:45:31.862292 2072 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:45:31.869858 kubelet[2072]: E1212 18:45:31.869788 2072 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:45:31.872863 kubelet[2072]: I1212 18:45:31.872747 2072 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:45:31.911712 kubelet[2072]: I1212 18:45:31.911201 2072 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:45:31.911712 kubelet[2072]: I1212 18:45:31.911232 2072 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:45:31.911712 kubelet[2072]: I1212 18:45:31.911273 2072 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:45:31.917909 kubelet[2072]: I1212 18:45:31.917289 2072 policy_none.go:49] "None policy: Start" Dec 12 18:45:31.917909 kubelet[2072]: I1212 18:45:31.917384 2072 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:45:31.917909 kubelet[2072]: I1212 18:45:31.917437 2072 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:45:31.936016 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 12 18:45:31.950218 kubelet[2072]: E1212 18:45:31.950127 2072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.177.56\" not found" Dec 12 18:45:32.050976 kubelet[2072]: E1212 18:45:32.050897 2072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.177.56\" not found" Dec 12 18:45:32.151812 kubelet[2072]: E1212 18:45:32.151712 2072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.177.56\" not found" Dec 12 18:45:32.179883 kubelet[2072]: E1212 18:45:32.179678 2072 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "192.168.177.56" not found Dec 12 18:45:32.246728 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:45:32.252513 kubelet[2072]: E1212 18:45:32.252103 2072 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"192.168.177.56\" not found" Dec 12 18:45:32.258471 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:45:32.271478 kubelet[2072]: E1212 18:45:32.271436 2072 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:45:32.272105 kubelet[2072]: I1212 18:45:32.272079 2072 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:45:32.272303 kubelet[2072]: I1212 18:45:32.272234 2072 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:45:32.276347 kubelet[2072]: I1212 18:45:32.276259 2072 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:45:32.280922 kubelet[2072]: E1212 18:45:32.280824 2072 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:45:32.281438 kubelet[2072]: E1212 18:45:32.281244 2072 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"192.168.177.56\" not found" Dec 12 18:45:32.354092 kubelet[2072]: I1212 18:45:32.353994 2072 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:45:32.359935 kubelet[2072]: I1212 18:45:32.359478 2072 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:45:32.359935 kubelet[2072]: I1212 18:45:32.359553 2072 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:45:32.359935 kubelet[2072]: I1212 18:45:32.359599 2072 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:45:32.359935 kubelet[2072]: I1212 18:45:32.359626 2072 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:45:32.360265 kubelet[2072]: E1212 18:45:32.359824 2072 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 12 18:45:32.374329 kubelet[2072]: I1212 18:45:32.374191 2072 kubelet_node_status.go:75] "Attempting to register node" node="192.168.177.56" Dec 12 18:45:32.389043 kubelet[2072]: I1212 18:45:32.386681 2072 kubelet_node_status.go:78] "Successfully registered node" node="192.168.177.56" Dec 12 18:45:32.398711 sshd[2060]: Connection closed by 5.217.91.112 port 54401 [preauth] Dec 12 18:45:32.401006 systemd[1]: sshd@24-172.237.139.56:22-5.217.91.112:54401.service: Deactivated successfully. 
Dec 12 18:45:32.514920 kubelet[2072]: I1212 18:45:32.514756 2072 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 12 18:45:32.516991 kubelet[2072]: I1212 18:45:32.516781 2072 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 12 18:45:32.517080 containerd[1574]: time="2025-12-12T18:45:32.516507536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:45:32.627146 kubelet[2072]: I1212 18:45:32.627062 2072 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 12 18:45:32.628077 kubelet[2072]: I1212 18:45:32.628044 2072 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Dec 12 18:45:32.628185 kubelet[2072]: I1212 18:45:32.628116 2072 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Dec 12 18:45:32.628365 kubelet[2072]: I1212 18:45:32.628295 2072 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Dec 12 18:45:32.705165 kubelet[2072]: I1212 18:45:32.705071 2072 apiserver.go:52] "Watching apiserver" Dec 12 18:45:32.705165 kubelet[2072]: E1212 18:45:32.705107 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 
18:45:32.731301 kubelet[2072]: E1212 18:45:32.731181 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:32.752633 kubelet[2072]: I1212 18:45:32.752499 2072 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:45:32.757046 kubelet[2072]: I1212 18:45:32.756978 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-policysync\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.758880 kubelet[2072]: I1212 18:45:32.757220 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9857815-a28a-40ea-b6e9-03c5c14b7123-tigera-ca-bundle\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.759116 kubelet[2072]: I1212 18:45:32.759049 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9d771747-d366-4e3a-b362-45818ffae2f6-varrun\") pod \"csi-node-driver-794fx\" (UID: \"9d771747-d366-4e3a-b362-45818ffae2f6\") " pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:32.759308 kubelet[2072]: I1212 18:45:32.759237 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23884c3a-1ddd-4f97-a8b8-cb428209b6dd-lib-modules\") pod \"kube-proxy-gfmv5\" (UID: 
\"23884c3a-1ddd-4f97-a8b8-cb428209b6dd\") " pod="kube-system/kube-proxy-gfmv5" Dec 12 18:45:32.759438 kubelet[2072]: I1212 18:45:32.759412 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-cni-log-dir\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.759710 kubelet[2072]: I1212 18:45:32.759499 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-cni-net-dir\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.759710 kubelet[2072]: I1212 18:45:32.759534 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-var-lib-calico\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.760355 kubelet[2072]: I1212 18:45:32.759562 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-xtables-lock\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.760454 kubelet[2072]: I1212 18:45:32.760319 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdw9l\" (UniqueName: \"kubernetes.io/projected/b9857815-a28a-40ea-b6e9-03c5c14b7123-kube-api-access-wdw9l\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " 
pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.760648 kubelet[2072]: I1212 18:45:32.760574 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d771747-d366-4e3a-b362-45818ffae2f6-kubelet-dir\") pod \"csi-node-driver-794fx\" (UID: \"9d771747-d366-4e3a-b362-45818ffae2f6\") " pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:32.760849 kubelet[2072]: I1212 18:45:32.760786 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9d771747-d366-4e3a-b362-45818ffae2f6-socket-dir\") pod \"csi-node-driver-794fx\" (UID: \"9d771747-d366-4e3a-b362-45818ffae2f6\") " pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:32.761054 kubelet[2072]: I1212 18:45:32.761000 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-flexvol-driver-host\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.763723 sudo[1929]: pam_unix(sudo:session): session closed for user root Dec 12 18:45:32.774486 kubelet[2072]: I1212 18:45:32.772707 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-lib-modules\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.774486 kubelet[2072]: I1212 18:45:32.772760 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-var-run-calico\") pod \"calico-node-9fdp9\" (UID: 
\"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.774486 kubelet[2072]: I1212 18:45:32.772800 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9d771747-d366-4e3a-b362-45818ffae2f6-registration-dir\") pod \"csi-node-driver-794fx\" (UID: \"9d771747-d366-4e3a-b362-45818ffae2f6\") " pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:32.774486 kubelet[2072]: I1212 18:45:32.772859 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q89j\" (UniqueName: \"kubernetes.io/projected/9d771747-d366-4e3a-b362-45818ffae2f6-kube-api-access-6q89j\") pod \"csi-node-driver-794fx\" (UID: \"9d771747-d366-4e3a-b362-45818ffae2f6\") " pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:32.774486 kubelet[2072]: I1212 18:45:32.772899 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23884c3a-1ddd-4f97-a8b8-cb428209b6dd-xtables-lock\") pod \"kube-proxy-gfmv5\" (UID: \"23884c3a-1ddd-4f97-a8b8-cb428209b6dd\") " pod="kube-system/kube-proxy-gfmv5" Dec 12 18:45:32.775746 kubelet[2072]: I1212 18:45:32.772937 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xswl\" (UniqueName: \"kubernetes.io/projected/23884c3a-1ddd-4f97-a8b8-cb428209b6dd-kube-api-access-2xswl\") pod \"kube-proxy-gfmv5\" (UID: \"23884c3a-1ddd-4f97-a8b8-cb428209b6dd\") " pod="kube-system/kube-proxy-gfmv5" Dec 12 18:45:32.775746 kubelet[2072]: I1212 18:45:32.772968 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b9857815-a28a-40ea-b6e9-03c5c14b7123-cni-bin-dir\") pod \"calico-node-9fdp9\" (UID: 
\"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.775746 kubelet[2072]: I1212 18:45:32.773012 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23884c3a-1ddd-4f97-a8b8-cb428209b6dd-kube-proxy\") pod \"kube-proxy-gfmv5\" (UID: \"23884c3a-1ddd-4f97-a8b8-cb428209b6dd\") " pod="kube-system/kube-proxy-gfmv5" Dec 12 18:45:32.775746 kubelet[2072]: I1212 18:45:32.773047 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b9857815-a28a-40ea-b6e9-03c5c14b7123-node-certs\") pod \"calico-node-9fdp9\" (UID: \"b9857815-a28a-40ea-b6e9-03c5c14b7123\") " pod="calico-system/calico-node-9fdp9" Dec 12 18:45:32.791983 systemd[1]: Created slice kubepods-besteffort-podb9857815_a28a_40ea_b6e9_03c5c14b7123.slice - libcontainer container kubepods-besteffort-podb9857815_a28a_40ea_b6e9_03c5c14b7123.slice. Dec 12 18:45:32.819205 sshd[1928]: Connection closed by 139.178.68.195 port 57350 Dec 12 18:45:32.820798 sshd-session[1925]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:32.829903 systemd[1]: Created slice kubepods-besteffort-pod23884c3a_1ddd_4f97_a8b8_cb428209b6dd.slice - libcontainer container kubepods-besteffort-pod23884c3a_1ddd_4f97_a8b8_cb428209b6dd.slice. Dec 12 18:45:32.833644 systemd[1]: sshd@23-172.237.139.56:22-139.178.68.195:57350.service: Deactivated successfully. Dec 12 18:45:32.838413 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:45:32.839037 systemd[1]: session-9.scope: Consumed 1.036s CPU time, 77.2M memory peak. Dec 12 18:45:32.844097 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:45:32.847340 systemd-logind[1549]: Removed session 9. 
Dec 12 18:45:32.881538 kubelet[2072]: E1212 18:45:32.881467 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:32.881769 kubelet[2072]: W1212 18:45:32.881741 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:32.882780 kubelet[2072]: E1212 18:45:32.882751 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:32.896655 kubelet[2072]: E1212 18:45:32.896625 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:32.897005 kubelet[2072]: W1212 18:45:32.896984 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:32.897146 kubelet[2072]: E1212 18:45:32.897126 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:32.913600 kubelet[2072]: E1212 18:45:32.913567 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:32.914436 kubelet[2072]: W1212 18:45:32.914408 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:32.914829 kubelet[2072]: E1212 18:45:32.914644 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:32.916255 kubelet[2072]: E1212 18:45:32.916169 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:32.916255 kubelet[2072]: W1212 18:45:32.916193 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:32.916692 kubelet[2072]: E1212 18:45:32.916607 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:32.926189 kubelet[2072]: E1212 18:45:32.926139 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:32.926621 kubelet[2072]: W1212 18:45:32.926368 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:32.926621 kubelet[2072]: E1212 18:45:32.926410 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:33.118704 kubelet[2072]: E1212 18:45:33.118491 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:33.119817 containerd[1574]: time="2025-12-12T18:45:33.119745726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fdp9,Uid:b9857815-a28a-40ea-b6e9-03c5c14b7123,Namespace:calico-system,Attempt:0,}" Dec 12 18:45:33.148877 kubelet[2072]: E1212 18:45:33.148808 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:33.150685 containerd[1574]: time="2025-12-12T18:45:33.150554086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gfmv5,Uid:23884c3a-1ddd-4f97-a8b8-cb428209b6dd,Namespace:kube-system,Attempt:0,}" Dec 12 18:45:33.157908 systemd[1]: Started sshd@25-172.237.139.56:22-89.42.101.74:43162.service - OpenSSH per-connection server daemon (89.42.101.74:43162). 
Dec 12 18:45:33.705528 kubelet[2072]: E1212 18:45:33.705477 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:33.873480 containerd[1574]: time="2025-12-12T18:45:33.873434876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:45:33.874646 containerd[1574]: time="2025-12-12T18:45:33.874580376Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:45:33.875273 containerd[1574]: time="2025-12-12T18:45:33.875221746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:45:33.875730 containerd[1574]: time="2025-12-12T18:45:33.875701836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:45:33.876260 containerd[1574]: time="2025-12-12T18:45:33.876207076Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:45:33.878877 containerd[1574]: time="2025-12-12T18:45:33.878830066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:45:33.879629 containerd[1574]: time="2025-12-12T18:45:33.879598936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 726.66605ms" Dec 12 18:45:33.880738 containerd[1574]: time="2025-12-12T18:45:33.880703986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 721.40575ms" Dec 12 18:45:33.888572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181712898.mount: Deactivated successfully. Dec 12 18:45:33.928692 containerd[1574]: time="2025-12-12T18:45:33.925132036Z" level=info msg="connecting to shim 4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913" address="unix:///run/containerd/s/9352a56094d05c5b301b89ba90ad2ebebf23b5d08fd4d7ce00bf763a799c145b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:45:33.944749 containerd[1574]: time="2025-12-12T18:45:33.940012056Z" level=info msg="connecting to shim f539e5cf6078b1c386f32be8aba34223d9666d553d90374f5c5d9995b7a6496a" address="unix:///run/containerd/s/c2c2dbec01c64adfa927d596afafb2c50dc4d3b1b2877339083a8203b49db070" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:45:34.045155 systemd[1]: Started cri-containerd-f539e5cf6078b1c386f32be8aba34223d9666d553d90374f5c5d9995b7a6496a.scope - libcontainer container f539e5cf6078b1c386f32be8aba34223d9666d553d90374f5c5d9995b7a6496a. Dec 12 18:45:34.056261 systemd[1]: Started cri-containerd-4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913.scope - libcontainer container 4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913. 
Dec 12 18:45:34.110444 containerd[1574]: time="2025-12-12T18:45:34.110231286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gfmv5,Uid:23884c3a-1ddd-4f97-a8b8-cb428209b6dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f539e5cf6078b1c386f32be8aba34223d9666d553d90374f5c5d9995b7a6496a\"" Dec 12 18:45:34.112031 kubelet[2072]: E1212 18:45:34.111829 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:34.113792 containerd[1574]: time="2025-12-12T18:45:34.113543456Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 18:45:34.127506 containerd[1574]: time="2025-12-12T18:45:34.127459166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fdp9,Uid:b9857815-a28a-40ea-b6e9-03c5c14b7123,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\"" Dec 12 18:45:34.128691 kubelet[2072]: E1212 18:45:34.128664 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:34.361069 kubelet[2072]: E1212 18:45:34.360362 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:34.706778 kubelet[2072]: E1212 18:45:34.706740 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:35.629144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494247570.mount: Deactivated successfully. 
Dec 12 18:45:35.708376 kubelet[2072]: E1212 18:45:35.708125 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:36.372709 kubelet[2072]: E1212 18:45:36.368689 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:36.568053 systemd[1]: Started sshd@26-172.237.139.56:22-5.212.153.251:29379.service - OpenSSH per-connection server daemon (5.212.153.251:29379). Dec 12 18:45:36.693530 containerd[1574]: time="2025-12-12T18:45:36.693418436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:36.694586 containerd[1574]: time="2025-12-12T18:45:36.694544966Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 12 18:45:36.695541 containerd[1574]: time="2025-12-12T18:45:36.695496826Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:36.698095 containerd[1574]: time="2025-12-12T18:45:36.698046416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:36.698774 containerd[1574]: time="2025-12-12T18:45:36.698730786Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.5851476s" Dec 12 18:45:36.698900 containerd[1574]: time="2025-12-12T18:45:36.698883536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:45:36.701896 containerd[1574]: time="2025-12-12T18:45:36.701872466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:45:36.707101 containerd[1574]: time="2025-12-12T18:45:36.707063606Z" level=info msg="CreateContainer within sandbox \"f539e5cf6078b1c386f32be8aba34223d9666d553d90374f5c5d9995b7a6496a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:45:36.709375 kubelet[2072]: E1212 18:45:36.709311 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:36.720608 containerd[1574]: time="2025-12-12T18:45:36.719451066Z" level=info msg="Container 956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:36.737671 containerd[1574]: time="2025-12-12T18:45:36.737637336Z" level=info msg="CreateContainer within sandbox \"f539e5cf6078b1c386f32be8aba34223d9666d553d90374f5c5d9995b7a6496a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb\"" Dec 12 18:45:36.738250 containerd[1574]: time="2025-12-12T18:45:36.738197526Z" level=info msg="StartContainer for \"956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb\"" Dec 12 18:45:36.767056 containerd[1574]: time="2025-12-12T18:45:36.742918236Z" level=info msg="connecting to shim 956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb" address="unix:///run/containerd/s/c2c2dbec01c64adfa927d596afafb2c50dc4d3b1b2877339083a8203b49db070" 
protocol=ttrpc version=3 Dec 12 18:45:36.820982 systemd[1]: Started cri-containerd-956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb.scope - libcontainer container 956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb. Dec 12 18:45:36.931203 containerd[1574]: time="2025-12-12T18:45:36.931162286Z" level=info msg="StartContainer for \"956748f8cf91b37fe12bc28e41818c236b7f5ab59eff56fd7fde69778096dbeb\" returns successfully" Dec 12 18:45:37.319092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373608485.mount: Deactivated successfully. Dec 12 18:45:37.384492 kubelet[2072]: E1212 18:45:37.384445 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:37.395109 kubelet[2072]: E1212 18:45:37.395062 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.395208 kubelet[2072]: W1212 18:45:37.395113 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.395208 kubelet[2072]: E1212 18:45:37.395173 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.395443 kubelet[2072]: E1212 18:45:37.395421 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.395443 kubelet[2072]: W1212 18:45:37.395435 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.395443 kubelet[2072]: E1212 18:45:37.395445 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.396091 kubelet[2072]: E1212 18:45:37.396007 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.396091 kubelet[2072]: W1212 18:45:37.396027 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.396091 kubelet[2072]: E1212 18:45:37.396043 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.396955 kubelet[2072]: E1212 18:45:37.396926 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.396955 kubelet[2072]: W1212 18:45:37.396942 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.396955 kubelet[2072]: E1212 18:45:37.396953 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.398124 kubelet[2072]: E1212 18:45:37.398085 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.398124 kubelet[2072]: W1212 18:45:37.398110 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.398124 kubelet[2072]: E1212 18:45:37.398126 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.398699 kubelet[2072]: E1212 18:45:37.398678 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.398699 kubelet[2072]: W1212 18:45:37.398697 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.398780 kubelet[2072]: E1212 18:45:37.398708 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.399422 kubelet[2072]: E1212 18:45:37.399393 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.399719 kubelet[2072]: W1212 18:45:37.399668 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.399719 kubelet[2072]: E1212 18:45:37.399690 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.400119 kubelet[2072]: E1212 18:45:37.400098 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.400119 kubelet[2072]: W1212 18:45:37.400116 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.400235 kubelet[2072]: E1212 18:45:37.400126 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.401257 kubelet[2072]: E1212 18:45:37.401237 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.401257 kubelet[2072]: W1212 18:45:37.401255 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.401418 kubelet[2072]: E1212 18:45:37.401266 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.401693 kubelet[2072]: I1212 18:45:37.401584 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gfmv5" podStartSLOduration=2.813788986 podStartE2EDuration="5.401539926s" podCreationTimestamp="2025-12-12 18:45:32 +0000 UTC" firstStartedPulling="2025-12-12 18:45:34.113296586 +0000 UTC m=+2.749591061" lastFinishedPulling="2025-12-12 18:45:36.701047536 +0000 UTC m=+5.337342001" observedRunningTime="2025-12-12 18:45:37.399320746 +0000 UTC m=+6.035615211" watchObservedRunningTime="2025-12-12 18:45:37.401539926 +0000 UTC m=+6.037834391" Dec 12 18:45:37.404280 kubelet[2072]: E1212 18:45:37.404207 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.404280 kubelet[2072]: W1212 18:45:37.404280 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.404346 kubelet[2072]: E1212 18:45:37.404293 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.406149 kubelet[2072]: E1212 18:45:37.406101 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.406496 kubelet[2072]: W1212 18:45:37.406310 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.406496 kubelet[2072]: E1212 18:45:37.406342 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.408999 kubelet[2072]: E1212 18:45:37.408935 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.408999 kubelet[2072]: W1212 18:45:37.408959 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.408999 kubelet[2072]: E1212 18:45:37.408972 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.409272 kubelet[2072]: E1212 18:45:37.409223 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.409272 kubelet[2072]: W1212 18:45:37.409239 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.409272 kubelet[2072]: E1212 18:45:37.409253 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.409462 kubelet[2072]: E1212 18:45:37.409433 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.409462 kubelet[2072]: W1212 18:45:37.409446 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.409462 kubelet[2072]: E1212 18:45:37.409458 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.409655 kubelet[2072]: E1212 18:45:37.409636 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.409655 kubelet[2072]: W1212 18:45:37.409651 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.409801 kubelet[2072]: E1212 18:45:37.409662 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.409879 kubelet[2072]: E1212 18:45:37.409855 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.409879 kubelet[2072]: W1212 18:45:37.409869 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.409879 kubelet[2072]: E1212 18:45:37.409878 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.410855 kubelet[2072]: E1212 18:45:37.410669 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.410855 kubelet[2072]: W1212 18:45:37.410696 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.410855 kubelet[2072]: E1212 18:45:37.410707 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.410985 kubelet[2072]: E1212 18:45:37.410954 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.410985 kubelet[2072]: W1212 18:45:37.410964 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.410985 kubelet[2072]: E1212 18:45:37.410974 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.411170 kubelet[2072]: E1212 18:45:37.411151 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.411170 kubelet[2072]: W1212 18:45:37.411166 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.411366 kubelet[2072]: E1212 18:45:37.411175 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.411784 kubelet[2072]: E1212 18:45:37.411748 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.411784 kubelet[2072]: W1212 18:45:37.411767 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.411784 kubelet[2072]: E1212 18:45:37.411777 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.413023 kubelet[2072]: E1212 18:45:37.412982 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.413023 kubelet[2072]: W1212 18:45:37.412998 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.413125 kubelet[2072]: E1212 18:45:37.413009 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.413598 kubelet[2072]: E1212 18:45:37.413494 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.413598 kubelet[2072]: W1212 18:45:37.413515 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.413598 kubelet[2072]: E1212 18:45:37.413537 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.414272 kubelet[2072]: E1212 18:45:37.414230 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.414272 kubelet[2072]: W1212 18:45:37.414249 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.414272 kubelet[2072]: E1212 18:45:37.414259 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.414606 kubelet[2072]: E1212 18:45:37.414587 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.414606 kubelet[2072]: W1212 18:45:37.414606 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.414678 kubelet[2072]: E1212 18:45:37.414616 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.414941 kubelet[2072]: E1212 18:45:37.414909 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.414941 kubelet[2072]: W1212 18:45:37.414926 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.414941 kubelet[2072]: E1212 18:45:37.414935 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.417914 kubelet[2072]: E1212 18:45:37.417876 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.418183 kubelet[2072]: W1212 18:45:37.418093 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.418183 kubelet[2072]: E1212 18:45:37.418120 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.420749 containerd[1574]: time="2025-12-12T18:45:37.420618226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:37.421503 containerd[1574]: time="2025-12-12T18:45:37.421474546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Dec 12 18:45:37.423219 kubelet[2072]: E1212 18:45:37.423173 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.423219 kubelet[2072]: W1212 18:45:37.423196 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.423219 kubelet[2072]: E1212 18:45:37.423211 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.423817 containerd[1574]: time="2025-12-12T18:45:37.423792776Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:37.425902 containerd[1574]: time="2025-12-12T18:45:37.425826876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:37.427453 containerd[1574]: time="2025-12-12T18:45:37.427417986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 725.43362ms" Dec 12 18:45:37.427562 containerd[1574]: time="2025-12-12T18:45:37.427543186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:45:37.430703 kubelet[2072]: E1212 18:45:37.430457 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.430703 kubelet[2072]: W1212 18:45:37.430476 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.430703 kubelet[2072]: E1212 18:45:37.430487 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.431011 kubelet[2072]: E1212 18:45:37.430980 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.431011 kubelet[2072]: W1212 18:45:37.430998 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.431011 kubelet[2072]: E1212 18:45:37.431009 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.432198 kubelet[2072]: E1212 18:45:37.432174 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.432198 kubelet[2072]: W1212 18:45:37.432192 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.432279 kubelet[2072]: E1212 18:45:37.432203 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.433039 kubelet[2072]: E1212 18:45:37.433012 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.433039 kubelet[2072]: W1212 18:45:37.433031 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.433039 kubelet[2072]: E1212 18:45:37.433042 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:45:37.434130 kubelet[2072]: E1212 18:45:37.434099 2072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:45:37.434130 kubelet[2072]: W1212 18:45:37.434120 2072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:45:37.434194 kubelet[2072]: E1212 18:45:37.434135 2072 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:45:37.436669 containerd[1574]: time="2025-12-12T18:45:37.436615936Z" level=info msg="CreateContainer within sandbox \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:45:37.451982 containerd[1574]: time="2025-12-12T18:45:37.451946376Z" level=info msg="Container b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:37.458019 containerd[1574]: time="2025-12-12T18:45:37.457983796Z" level=info msg="CreateContainer within sandbox \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681\"" Dec 12 18:45:37.460113 containerd[1574]: time="2025-12-12T18:45:37.460078926Z" level=info msg="StartContainer for \"b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681\"" Dec 12 18:45:37.463163 containerd[1574]: time="2025-12-12T18:45:37.463133156Z" level=info msg="connecting to shim b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681" address="unix:///run/containerd/s/9352a56094d05c5b301b89ba90ad2ebebf23b5d08fd4d7ce00bf763a799c145b" protocol=ttrpc version=3 Dec 12 18:45:37.509003 systemd[1]: Started cri-containerd-b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681.scope - libcontainer container b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681. Dec 12 18:45:37.626947 containerd[1574]: time="2025-12-12T18:45:37.624941186Z" level=info msg="StartContainer for \"b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681\" returns successfully" Dec 12 18:45:37.656771 systemd[1]: cri-containerd-b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681.scope: Deactivated successfully. 
Dec 12 18:45:37.661261 sshd[2238]: Connection closed by 5.212.153.251 port 29379 [preauth] Dec 12 18:45:37.661750 containerd[1574]: time="2025-12-12T18:45:37.661348676Z" level=info msg="received container exit event container_id:\"b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681\" id:\"b4f267eab98323e63ac57f92d912407d974d73eb22bf427165f0ba999549a681\" pid:2397 exited_at:{seconds:1765565137 nanos:659664306}" Dec 12 18:45:37.667753 systemd[1]: sshd@26-172.237.139.56:22-5.212.153.251:29379.service: Deactivated successfully. Dec 12 18:45:37.710531 kubelet[2072]: E1212 18:45:37.710490 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:38.360690 kubelet[2072]: E1212 18:45:38.360147 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:38.387474 kubelet[2072]: E1212 18:45:38.387125 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:38.387474 kubelet[2072]: E1212 18:45:38.387191 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:38.388058 containerd[1574]: time="2025-12-12T18:45:38.388023736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:45:38.407713 systemd[1]: Started sshd@27-172.237.139.56:22-5.74.201.180:47402.service - OpenSSH per-connection server daemon (5.74.201.180:47402). 
Dec 12 18:45:38.711363 kubelet[2072]: E1212 18:45:38.711329 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:38.893374 systemd[1]: Started sshd@28-172.237.139.56:22-107.182.236.53:12502.service - OpenSSH per-connection server daemon (107.182.236.53:12502). Dec 12 18:45:39.712028 kubelet[2072]: E1212 18:45:39.711855 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:39.743189 sshd[2500]: Connection closed by 107.182.236.53 port 12502 [preauth] Dec 12 18:45:39.745723 systemd[1]: sshd@28-172.237.139.56:22-107.182.236.53:12502.service: Deactivated successfully. Dec 12 18:45:40.365859 kubelet[2072]: E1212 18:45:40.365289 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:40.714137 kubelet[2072]: E1212 18:45:40.713556 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:40.713804 systemd[1]: Started sshd@29-172.237.139.56:22-5.238.145.19:51420.service - OpenSSH per-connection server daemon (5.238.145.19:51420). Dec 12 18:45:41.464892 sshd[2510]: Connection closed by 5.238.145.19 port 51420 [preauth] Dec 12 18:45:41.469385 systemd[1]: sshd@29-172.237.139.56:22-5.238.145.19:51420.service: Deactivated successfully. 
Dec 12 18:45:41.570872 containerd[1574]: time="2025-12-12T18:45:41.569541366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:41.571489 containerd[1574]: time="2025-12-12T18:45:41.570981116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:45:41.572147 containerd[1574]: time="2025-12-12T18:45:41.572110636Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:41.575440 containerd[1574]: time="2025-12-12T18:45:41.575410436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:41.576264 containerd[1574]: time="2025-12-12T18:45:41.576225766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.18815586s" Dec 12 18:45:41.576325 containerd[1574]: time="2025-12-12T18:45:41.576292036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:45:41.584806 containerd[1574]: time="2025-12-12T18:45:41.584767906Z" level=info msg="CreateContainer within sandbox \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:45:41.610863 containerd[1574]: time="2025-12-12T18:45:41.607671506Z" level=info msg="Container 
b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:41.620805 containerd[1574]: time="2025-12-12T18:45:41.620769166Z" level=info msg="CreateContainer within sandbox \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495\"" Dec 12 18:45:41.621938 containerd[1574]: time="2025-12-12T18:45:41.621898106Z" level=info msg="StartContainer for \"b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495\"" Dec 12 18:45:41.624003 containerd[1574]: time="2025-12-12T18:45:41.623943006Z" level=info msg="connecting to shim b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495" address="unix:///run/containerd/s/9352a56094d05c5b301b89ba90ad2ebebf23b5d08fd4d7ce00bf763a799c145b" protocol=ttrpc version=3 Dec 12 18:45:41.697019 systemd[1]: Started cri-containerd-b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495.scope - libcontainer container b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495. 
Dec 12 18:45:41.713912 kubelet[2072]: E1212 18:45:41.713864 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:41.867824 containerd[1574]: time="2025-12-12T18:45:41.867458296Z" level=info msg="StartContainer for \"b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495\" returns successfully" Dec 12 18:45:42.369088 kubelet[2072]: E1212 18:45:42.368195 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:42.414961 kubelet[2072]: E1212 18:45:42.414934 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:42.463070 systemd[1]: Started sshd@30-172.237.139.56:22-178.131.162.193:26141.service - OpenSSH per-connection server daemon (178.131.162.193:26141). Dec 12 18:45:42.521118 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 18:45:42.714979 kubelet[2072]: E1212 18:45:42.714929 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:42.903252 sshd[2492]: Connection closed by 5.74.201.180 port 47402 [preauth] Dec 12 18:45:42.907043 systemd[1]: sshd@27-172.237.139.56:22-5.74.201.180:47402.service: Deactivated successfully. 
Dec 12 18:45:43.489974 kubelet[2072]: E1212 18:45:43.489154 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:45:43.715832 kubelet[2072]: E1212 18:45:43.715762 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:43.927120 containerd[1574]: time="2025-12-12T18:45:43.927064206Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:45:43.933888 systemd[1]: cri-containerd-b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495.scope: Deactivated successfully. Dec 12 18:45:43.934852 systemd[1]: cri-containerd-b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495.scope: Consumed 2.233s CPU time, 192.6M memory peak, 171.3M written to disk. Dec 12 18:45:43.940512 containerd[1574]: time="2025-12-12T18:45:43.940463536Z" level=info msg="received container exit event container_id:\"b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495\" id:\"b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495\" pid:2532 exited_at:{seconds:1765565143 nanos:940162246}" Dec 12 18:45:43.971810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b8f1a79309c1a5c5ac95cfa8cfb6a3c3c7596177983f41d6c1d9e079a71495-rootfs.mount: Deactivated successfully. Dec 12 18:45:44.000544 kubelet[2072]: I1212 18:45:44.000508 2072 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:45:44.096030 sshd[2550]: Connection closed by 178.131.162.193 port 26141 [preauth] Dec 12 18:45:44.099775 systemd[1]: sshd@30-172.237.139.56:22-178.131.162.193:26141.service: Deactivated successfully. 
Dec 12 18:45:44.368098 systemd[1]: Created slice kubepods-besteffort-pod9d771747_d366_4e3a_b362_45818ffae2f6.slice - libcontainer container kubepods-besteffort-pod9d771747_d366_4e3a_b362_45818ffae2f6.slice. Dec 12 18:45:44.372372 containerd[1574]: time="2025-12-12T18:45:44.372327023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-794fx,Uid:9d771747-d366-4e3a-b362-45818ffae2f6,Namespace:calico-system,Attempt:0,}" Dec 12 18:45:44.443698 containerd[1574]: time="2025-12-12T18:45:44.443641576Z" level=error msg="Failed to destroy network for sandbox \"87883d181b4528b0c8de046a63117132d1c8c966edc6c492acaf61cd6f6871da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:44.445975 systemd[1]: run-netns-cni\x2d00e5ef9d\x2d85a7\x2ddf1a\x2dd151\x2d2b3b6c3e1484.mount: Deactivated successfully. Dec 12 18:45:44.447979 containerd[1574]: time="2025-12-12T18:45:44.447917352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-794fx,Uid:9d771747-d366-4e3a-b362-45818ffae2f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87883d181b4528b0c8de046a63117132d1c8c966edc6c492acaf61cd6f6871da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:44.448643 kubelet[2072]: E1212 18:45:44.448551 2072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87883d181b4528b0c8de046a63117132d1c8c966edc6c492acaf61cd6f6871da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 
18:45:44.448747 kubelet[2072]: E1212 18:45:44.448693 2072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87883d181b4528b0c8de046a63117132d1c8c966edc6c492acaf61cd6f6871da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:44.448747 kubelet[2072]: E1212 18:45:44.448733 2072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87883d181b4528b0c8de046a63117132d1c8c966edc6c492acaf61cd6f6871da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:44.448871 kubelet[2072]: E1212 18:45:44.448814 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87883d181b4528b0c8de046a63117132d1c8c966edc6c492acaf61cd6f6871da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:44.493230 kubelet[2072]: E1212 18:45:44.493188 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 
172.232.0.20 172.232.0.15" Dec 12 18:45:44.494962 containerd[1574]: time="2025-12-12T18:45:44.494932843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:45:44.716738 kubelet[2072]: E1212 18:45:44.716523 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:45.466294 containerd[1574]: time="2025-12-12T18:45:45.466107089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=51422338" Dec 12 18:45:45.466294 containerd[1574]: time="2025-12-12T18:45:45.466143149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.4\": failed to copy: read tcp [2600:3c06::2000:faff:fe68:e5b2]:56812->[2606:50c0:8002::154]:443: read: connection reset by peer" Dec 12 18:45:45.467362 kubelet[2072]: E1212 18:45:45.466771 2072 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.4\": failed to copy: read tcp [2600:3c06::2000:faff:fe68:e5b2]:56812->[2606:50c0:8002::154]:443: read: connection reset by peer" image="ghcr.io/flatcar/calico/node:v3.30.4" Dec 12 18:45:45.467362 kubelet[2072]: E1212 18:45:45.466986 2072 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.4\": failed to copy: read tcp [2600:3c06::2000:faff:fe68:e5b2]:56812->[2606:50c0:8002::154]:443: read: connection reset by peer" image="ghcr.io/flatcar/calico/node:v3.30.4" Dec 12 18:45:45.467999 kubelet[2072]: E1212 18:45:45.467888 2072 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.9
6.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdw9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-9fdp9_calico-system(b9857815-a28a-40ea-b6e9-03c5c14b7123): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.4\": failed to copy: read tcp [2600:3c06::2000:faff:fe68:e5b2]:56812->[2606:50c0:8002::154]:443: read: connection reset by peer" logger="UnhandledError" Dec 12 18:45:45.469364 kubelet[2072]: E1212 18:45:45.469296 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.4\\\": failed to copy: read tcp [2600:3c06::2000:faff:fe68:e5b2]:56812->[2606:50c0:8002::154]:443: read: connection 
reset by peer\"" pod="calico-system/calico-node-9fdp9" podUID="b9857815-a28a-40ea-b6e9-03c5c14b7123" Dec 12 18:45:45.717989 kubelet[2072]: E1212 18:45:45.717746 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:45.872943 systemd[1]: Created slice kubepods-besteffort-pod9e091f2a_4a6b_45f5_b2aa_caf7f82498f7.slice - libcontainer container kubepods-besteffort-pod9e091f2a_4a6b_45f5_b2aa_caf7f82498f7.slice. Dec 12 18:45:45.899196 kubelet[2072]: I1212 18:45:45.899148 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84sq\" (UniqueName: \"kubernetes.io/projected/9e091f2a-4a6b-45f5-b2aa-caf7f82498f7-kube-api-access-m84sq\") pod \"nginx-deployment-7fcdb87857-2blbf\" (UID: \"9e091f2a-4a6b-45f5-b2aa-caf7f82498f7\") " pod="default/nginx-deployment-7fcdb87857-2blbf" Dec 12 18:45:46.177338 containerd[1574]: time="2025-12-12T18:45:46.177293300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2blbf,Uid:9e091f2a-4a6b-45f5-b2aa-caf7f82498f7,Namespace:default,Attempt:0,}" Dec 12 18:45:46.282256 containerd[1574]: time="2025-12-12T18:45:46.282169041Z" level=error msg="Failed to destroy network for sandbox \"e0ce150326a37094e4a2a21d6c618225e30f176825437f6d74ef79cd18e2f0f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:46.284574 systemd[1]: run-netns-cni\x2dd35e05a5\x2da693\x2d8535\x2d4604\x2d685a4f52cb75.mount: Deactivated successfully. 
Dec 12 18:45:46.285119 containerd[1574]: time="2025-12-12T18:45:46.285014564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2blbf,Uid:9e091f2a-4a6b-45f5-b2aa-caf7f82498f7,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ce150326a37094e4a2a21d6c618225e30f176825437f6d74ef79cd18e2f0f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:46.286330 kubelet[2072]: E1212 18:45:46.285911 2072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ce150326a37094e4a2a21d6c618225e30f176825437f6d74ef79cd18e2f0f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:46.286330 kubelet[2072]: E1212 18:45:46.285985 2072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ce150326a37094e4a2a21d6c618225e30f176825437f6d74ef79cd18e2f0f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-2blbf" Dec 12 18:45:46.286330 kubelet[2072]: E1212 18:45:46.286009 2072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ce150326a37094e4a2a21d6c618225e30f176825437f6d74ef79cd18e2f0f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-7fcdb87857-2blbf" Dec 12 18:45:46.286330 kubelet[2072]: E1212 18:45:46.286055 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-2blbf_default(9e091f2a-4a6b-45f5-b2aa-caf7f82498f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-2blbf_default(9e091f2a-4a6b-45f5-b2aa-caf7f82498f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0ce150326a37094e4a2a21d6c618225e30f176825437f6d74ef79cd18e2f0f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-2blbf" podUID="9e091f2a-4a6b-45f5-b2aa-caf7f82498f7" Dec 12 18:45:46.679238 systemd[1]: Started sshd@31-172.237.139.56:22-5.209.71.143:54074.service - OpenSSH per-connection server daemon (5.209.71.143:54074). Dec 12 18:45:46.718201 kubelet[2072]: E1212 18:45:46.718151 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:47.499795 sshd[2628]: Connection closed by 5.209.71.143 port 54074 [preauth] Dec 12 18:45:47.500935 systemd[1]: sshd@31-172.237.139.56:22-5.209.71.143:54074.service: Deactivated successfully. Dec 12 18:45:47.719218 kubelet[2072]: E1212 18:45:47.719158 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:47.856957 systemd[1]: Started sshd@32-172.237.139.56:22-46.100.178.243:24224.service - OpenSSH per-connection server daemon (46.100.178.243:24224). Dec 12 18:45:48.251992 sshd[2128]: Connection closed by 89.42.101.74 port 43162 [preauth] Dec 12 18:45:48.253716 systemd[1]: sshd@25-172.237.139.56:22-89.42.101.74:43162.service: Deactivated successfully. 
Dec 12 18:45:48.719910 kubelet[2072]: E1212 18:45:48.719858 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:48.943431 systemd[1]: Started sshd@33-172.237.139.56:22-66.79.102.161:15760.service - OpenSSH per-connection server daemon (66.79.102.161:15760). Dec 12 18:45:49.720282 kubelet[2072]: E1212 18:45:49.720249 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:50.472239 sshd[2634]: Connection closed by 46.100.178.243 port 24224 [preauth] Dec 12 18:45:50.475066 systemd[1]: sshd@32-172.237.139.56:22-46.100.178.243:24224.service: Deactivated successfully. Dec 12 18:45:50.675960 sshd[2640]: Connection closed by 66.79.102.161 port 15760 [preauth] Dec 12 18:45:50.678066 systemd[1]: sshd@33-172.237.139.56:22-66.79.102.161:15760.service: Deactivated successfully. Dec 12 18:45:50.721373 kubelet[2072]: E1212 18:45:50.721341 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:51.704452 kubelet[2072]: E1212 18:45:51.704326 2072 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:51.722276 kubelet[2072]: E1212 18:45:51.722205 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:52.723185 kubelet[2072]: E1212 18:45:52.723118 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:53.080293 systemd[1]: Started sshd@34-172.237.139.56:22-204.18.131.46:19555.service - OpenSSH per-connection server daemon (204.18.131.46:19555). 
Dec 12 18:45:53.723755 kubelet[2072]: E1212 18:45:53.723695 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:53.934274 sshd[2648]: Connection closed by 204.18.131.46 port 19555 [preauth] Dec 12 18:45:53.936317 systemd[1]: sshd@34-172.237.139.56:22-204.18.131.46:19555.service: Deactivated successfully. Dec 12 18:45:54.637067 systemd[1]: Started sshd@35-172.237.139.56:22-5.239.173.49:11617.service - OpenSSH per-connection server daemon (5.239.173.49:11617). Dec 12 18:45:54.724411 kubelet[2072]: E1212 18:45:54.724335 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:55.725077 kubelet[2072]: E1212 18:45:55.724998 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:56.003439 update_engine[1552]: I20251212 18:45:56.003137 1552 update_attempter.cc:509] Updating boot flags... Dec 12 18:45:56.082427 systemd[1]: Started sshd@36-172.237.139.56:22-2.147.59.120:63826.service - OpenSSH per-connection server daemon (2.147.59.120:63826). Dec 12 18:45:56.725872 kubelet[2072]: E1212 18:45:56.725782 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:56.801140 sshd[2671]: Connection closed by 2.147.59.120 port 63826 [preauth] Dec 12 18:45:56.803234 systemd[1]: sshd@36-172.237.139.56:22-2.147.59.120:63826.service: Deactivated successfully. 
Dec 12 18:45:57.362287 containerd[1574]: time="2025-12-12T18:45:57.362158265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-794fx,Uid:9d771747-d366-4e3a-b362-45818ffae2f6,Namespace:calico-system,Attempt:0,}" Dec 12 18:45:57.425122 containerd[1574]: time="2025-12-12T18:45:57.425065833Z" level=error msg="Failed to destroy network for sandbox \"529383f0f80fc45c0b025dfc8341e1af6835d30a6003d9bcd5f1d857080a1fb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:57.427893 systemd[1]: run-netns-cni\x2dcb7f6890\x2d6e5d\x2d1fbc\x2ddea1\x2dd186c205d1f8.mount: Deactivated successfully. Dec 12 18:45:57.428556 containerd[1574]: time="2025-12-12T18:45:57.428220755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-794fx,Uid:9d771747-d366-4e3a-b362-45818ffae2f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"529383f0f80fc45c0b025dfc8341e1af6835d30a6003d9bcd5f1d857080a1fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:57.429698 kubelet[2072]: E1212 18:45:57.429636 2072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"529383f0f80fc45c0b025dfc8341e1af6835d30a6003d9bcd5f1d857080a1fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:45:57.429769 kubelet[2072]: E1212 18:45:57.429724 2072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"529383f0f80fc45c0b025dfc8341e1af6835d30a6003d9bcd5f1d857080a1fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:57.429801 kubelet[2072]: E1212 18:45:57.429762 2072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"529383f0f80fc45c0b025dfc8341e1af6835d30a6003d9bcd5f1d857080a1fb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-794fx" Dec 12 18:45:57.430055 kubelet[2072]: E1212 18:45:57.429977 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"529383f0f80fc45c0b025dfc8341e1af6835d30a6003d9bcd5f1d857080a1fb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:45:57.686361 systemd[1]: Started sshd@37-172.237.139.56:22-202.191.105.23:62648.service - OpenSSH per-connection server daemon (202.191.105.23:62648). 
Dec 12 18:45:57.725958 kubelet[2072]: E1212 18:45:57.725922 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:58.723882 sshd[2710]: Connection closed by 202.191.105.23 port 62648 [preauth] Dec 12 18:45:58.726196 systemd[1]: sshd@37-172.237.139.56:22-202.191.105.23:62648.service: Deactivated successfully. Dec 12 18:45:58.726736 kubelet[2072]: E1212 18:45:58.726394 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:59.727114 kubelet[2072]: E1212 18:45:59.727046 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:45:59.940300 systemd[1]: Started sshd@38-172.237.139.56:22-37.254.15.0:41712.service - OpenSSH per-connection server daemon (37.254.15.0:41712). Dec 12 18:46:00.141645 systemd[1]: Started sshd@39-172.237.139.56:22-5.209.205.40:63997.service - OpenSSH per-connection server daemon (5.209.205.40:63997). Dec 12 18:46:00.361355 kubelet[2072]: E1212 18:46:00.361281 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:46:00.365265 containerd[1574]: time="2025-12-12T18:46:00.365036122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:46:00.654955 sshd[2716]: Connection closed by 37.254.15.0 port 41712 [preauth] Dec 12 18:46:00.657512 systemd[1]: sshd@38-172.237.139.56:22-37.254.15.0:41712.service: Deactivated successfully. 
Dec 12 18:46:00.728203 kubelet[2072]: E1212 18:46:00.728003 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:01.283190 sshd[2720]: Connection closed by 5.209.205.40 port 63997 [preauth] Dec 12 18:46:01.286479 systemd[1]: sshd@39-172.237.139.56:22-5.209.205.40:63997.service: Deactivated successfully. Dec 12 18:46:01.366173 containerd[1574]: time="2025-12-12T18:46:01.365408009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2blbf,Uid:9e091f2a-4a6b-45f5-b2aa-caf7f82498f7,Namespace:default,Attempt:0,}" Dec 12 18:46:01.531061 containerd[1574]: time="2025-12-12T18:46:01.530987401Z" level=error msg="Failed to destroy network for sandbox \"148b74d828fe0fe7fc19cf259592b6f725f6b20686debf7c69cd70e44ee054bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:01.533073 systemd[1]: run-netns-cni\x2dc865180d\x2dc18e\x2d7235\x2daa53\x2df95d2b51cd64.mount: Deactivated successfully. 
Dec 12 18:46:01.536610 containerd[1574]: time="2025-12-12T18:46:01.536566114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2blbf,Uid:9e091f2a-4a6b-45f5-b2aa-caf7f82498f7,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b74d828fe0fe7fc19cf259592b6f725f6b20686debf7c69cd70e44ee054bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:01.538285 kubelet[2072]: E1212 18:46:01.538208 2072 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b74d828fe0fe7fc19cf259592b6f725f6b20686debf7c69cd70e44ee054bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:01.538506 kubelet[2072]: E1212 18:46:01.538438 2072 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b74d828fe0fe7fc19cf259592b6f725f6b20686debf7c69cd70e44ee054bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-2blbf" Dec 12 18:46:01.538506 kubelet[2072]: E1212 18:46:01.538510 2072 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b74d828fe0fe7fc19cf259592b6f725f6b20686debf7c69cd70e44ee054bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-7fcdb87857-2blbf" Dec 12 18:46:01.539909 kubelet[2072]: E1212 18:46:01.539019 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-2blbf_default(9e091f2a-4a6b-45f5-b2aa-caf7f82498f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-2blbf_default(9e091f2a-4a6b-45f5-b2aa-caf7f82498f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"148b74d828fe0fe7fc19cf259592b6f725f6b20686debf7c69cd70e44ee054bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-2blbf" podUID="9e091f2a-4a6b-45f5-b2aa-caf7f82498f7" Dec 12 18:46:01.730469 kubelet[2072]: E1212 18:46:01.730386 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:02.740256 kubelet[2072]: E1212 18:46:02.740072 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:03.569187 systemd[1]: Started sshd@40-172.237.139.56:22-164.215.136.5:44370.service - OpenSSH per-connection server daemon (164.215.136.5:44370). Dec 12 18:46:03.741261 kubelet[2072]: E1212 18:46:03.741224 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:04.280439 systemd[1]: Started sshd@41-172.237.139.56:22-151.145.54.65:37806.service - OpenSSH per-connection server daemon (151.145.54.65:37806). 
Dec 12 18:46:04.742913 kubelet[2072]: E1212 18:46:04.742668 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:05.610882 sshd[2760]: Connection closed by 164.215.136.5 port 44370 [preauth] Dec 12 18:46:05.609131 systemd[1]: sshd@40-172.237.139.56:22-164.215.136.5:44370.service: Deactivated successfully. Dec 12 18:46:05.654258 systemd[1]: Started sshd@42-172.237.139.56:22-217.218.153.240:28155.service - OpenSSH per-connection server daemon (217.218.153.240:28155). Dec 12 18:46:05.744962 kubelet[2072]: E1212 18:46:05.744689 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:06.731754 systemd[1]: Started sshd@43-172.237.139.56:22-151.234.224.44:55094.service - OpenSSH per-connection server daemon (151.234.224.44:55094). Dec 12 18:46:06.746476 sshd[2770]: Connection closed by 217.218.153.240 port 28155 [preauth] Dec 12 18:46:06.747209 kubelet[2072]: E1212 18:46:06.747153 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:06.747329 systemd[1]: sshd@42-172.237.139.56:22-217.218.153.240:28155.service: Deactivated successfully. Dec 12 18:46:07.485397 sshd[2774]: Connection closed by 151.234.224.44 port 55094 [preauth] Dec 12 18:46:07.489069 systemd[1]: sshd@43-172.237.139.56:22-151.234.224.44:55094.service: Deactivated successfully. Dec 12 18:46:07.545268 systemd[1]: Started sshd@44-172.237.139.56:22-80.83.235.6:15163.service - OpenSSH per-connection server daemon (80.83.235.6:15163). Dec 12 18:46:07.733438 systemd[1]: Started sshd@45-172.237.139.56:22-5.250.58.77:63815.service - OpenSSH per-connection server daemon (5.250.58.77:63815). 
Dec 12 18:46:07.749121 kubelet[2072]: E1212 18:46:07.748236 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:07.953343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635677814.mount: Deactivated successfully. Dec 12 18:46:07.986008 containerd[1574]: time="2025-12-12T18:46:07.985910772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:07.988413 containerd[1574]: time="2025-12-12T18:46:07.988376433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:46:07.989861 containerd[1574]: time="2025-12-12T18:46:07.989189953Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:07.991160 containerd[1574]: time="2025-12-12T18:46:07.991112914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:07.991874 containerd[1574]: time="2025-12-12T18:46:07.991831104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.626716232s" Dec 12 18:46:07.992004 containerd[1574]: time="2025-12-12T18:46:07.991986714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:46:08.014520 containerd[1574]: 
time="2025-12-12T18:46:08.014234750Z" level=info msg="CreateContainer within sandbox \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:46:08.027045 containerd[1574]: time="2025-12-12T18:46:08.025804334Z" level=info msg="Container a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:08.036066 containerd[1574]: time="2025-12-12T18:46:08.036011187Z" level=info msg="CreateContainer within sandbox \"4b9d05346728077b8645b7a52cf5c44b18e6ed58ff830d2af920801b8000c913\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd\"" Dec 12 18:46:08.037099 containerd[1574]: time="2025-12-12T18:46:08.037069147Z" level=info msg="StartContainer for \"a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd\"" Dec 12 18:46:08.039060 containerd[1574]: time="2025-12-12T18:46:08.039030807Z" level=info msg="connecting to shim a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd" address="unix:///run/containerd/s/9352a56094d05c5b301b89ba90ad2ebebf23b5d08fd4d7ce00bf763a799c145b" protocol=ttrpc version=3 Dec 12 18:46:08.217891 sshd[2764]: Connection closed by 151.145.54.65 port 37806 [preauth] Dec 12 18:46:08.228020 systemd[1]: Started cri-containerd-a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd.scope - libcontainer container a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd. Dec 12 18:46:08.228619 systemd[1]: sshd@41-172.237.139.56:22-151.145.54.65:37806.service: Deactivated successfully. 
Dec 12 18:46:08.379452 containerd[1574]: time="2025-12-12T18:46:08.379332312Z" level=info msg="StartContainer for \"a41ed47f4b8285ecae7b87063ab7e9479fe7e824433af73481f91c2e235b0ccd\" returns successfully" Dec 12 18:46:08.563017 kubelet[2072]: E1212 18:46:08.562738 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:46:08.572062 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:46:08.572247 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:46:08.582893 kubelet[2072]: I1212 18:46:08.582769 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9fdp9" podStartSLOduration=2.7182921799999997 podStartE2EDuration="36.582666899s" podCreationTimestamp="2025-12-12 18:45:32 +0000 UTC" firstStartedPulling="2025-12-12 18:45:34.129305796 +0000 UTC m=+2.765600261" lastFinishedPulling="2025-12-12 18:46:07.993680515 +0000 UTC m=+36.629974980" observedRunningTime="2025-12-12 18:46:08.579676928 +0000 UTC m=+37.215971393" watchObservedRunningTime="2025-12-12 18:46:08.582666899 +0000 UTC m=+37.218961364" Dec 12 18:46:08.739362 sshd[2787]: Connection closed by 5.250.58.77 port 63815 [preauth] Dec 12 18:46:08.742189 systemd[1]: sshd@45-172.237.139.56:22-5.250.58.77:63815.service: Deactivated successfully. 
Dec 12 18:46:08.748732 kubelet[2072]: E1212 18:46:08.748694 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:09.564788 kubelet[2072]: E1212 18:46:09.564303 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Dec 12 18:46:09.749763 kubelet[2072]: E1212 18:46:09.749702 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:10.038083 systemd[1]: Started sshd@46-172.237.139.56:22-188.241.218.191:61050.service - OpenSSH per-connection server daemon (188.241.218.191:61050). Dec 12 18:46:10.308060 systemd[1]: Started sshd@47-172.237.139.56:22-65.109.208.123:38964.service - OpenSSH per-connection server daemon (65.109.208.123:38964). Dec 12 18:46:10.362502 containerd[1574]: time="2025-12-12T18:46:10.362453685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-794fx,Uid:9d771747-d366-4e3a-b362-45818ffae2f6,Namespace:calico-system,Attempt:0,}" Dec 12 18:46:10.830662 kubelet[2072]: E1212 18:46:10.829590 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:11.105660 systemd-networkd[1477]: calibdaa8153ca3: Link UP Dec 12 18:46:11.106735 systemd-networkd[1477]: calibdaa8153ca3: Gained carrier Dec 12 18:46:11.131506 containerd[1574]: 2025-12-12 18:46:10.486 [INFO][3002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:46:11.131506 containerd[1574]: 2025-12-12 18:46:10.504 [INFO][3002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.177.56-k8s-csi--node--driver--794fx-eth0 csi-node-driver- calico-system 9d771747-d366-4e3a-b362-45818ffae2f6 1473 0 2025-12-12 18:45:32 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 192.168.177.56 csi-node-driver-794fx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibdaa8153ca3 [] [] }} ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-" Dec 12 18:46:11.131506 containerd[1574]: 2025-12-12 18:46:10.829 [INFO][3002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.131506 containerd[1574]: 2025-12-12 18:46:10.982 [INFO][3016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" HandleID="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Workload="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:10.982 [INFO][3016] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" HandleID="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Workload="192.168.177.56-k8s-csi--node--driver--794fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000314df0), Attrs:map[string]string{"namespace":"calico-system", "node":"192.168.177.56", "pod":"csi-node-driver-794fx", "timestamp":"2025-12-12 18:46:10.982369507 +0000 UTC"}, Hostname:"192.168.177.56", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:10.983 [INFO][3016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:10.983 [INFO][3016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:10.983 [INFO][3016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.177.56' Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:11.018 [INFO][3016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" host="192.168.177.56" Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:11.026 [INFO][3016] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.177.56" Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:11.034 [INFO][3016] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:11.036 [INFO][3016] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:11.038 [INFO][3016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:11.131787 containerd[1574]: 2025-12-12 18:46:11.038 [INFO][3016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" host="192.168.177.56" Dec 12 18:46:11.132293 containerd[1574]: 2025-12-12 18:46:11.042 [INFO][3016] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b Dec 12 18:46:11.132293 containerd[1574]: 2025-12-12 18:46:11.046 [INFO][3016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" host="192.168.177.56" Dec 12 18:46:11.132293 containerd[1574]: 2025-12-12 18:46:11.050 [INFO][3016] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.129/26] block=192.168.93.128/26 handle="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" host="192.168.177.56" Dec 12 18:46:11.132293 containerd[1574]: 2025-12-12 18:46:11.050 [INFO][3016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.129/26] handle="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" host="192.168.177.56" Dec 12 18:46:11.132293 containerd[1574]: 2025-12-12 18:46:11.051 [INFO][3016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:46:11.132293 containerd[1574]: 2025-12-12 18:46:11.051 [INFO][3016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.129/26] IPv6=[] ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" HandleID="k8s-pod-network.ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Workload="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.132425 containerd[1574]: 2025-12-12 18:46:11.059 [INFO][3002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-csi--node--driver--794fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d771747-d366-4e3a-b362-45818ffae2f6", ResourceVersion:"1473", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"", Pod:"csi-node-driver-794fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdaa8153ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:46:11.132498 containerd[1574]: 2025-12-12 18:46:11.059 [INFO][3002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.129/32] ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.132498 containerd[1574]: 2025-12-12 18:46:11.059 [INFO][3002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdaa8153ca3 ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.132498 containerd[1574]: 2025-12-12 18:46:11.106 [INFO][3002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.132577 containerd[1574]: 2025-12-12 18:46:11.111 [INFO][3002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-csi--node--driver--794fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d771747-d366-4e3a-b362-45818ffae2f6", ResourceVersion:"1473", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 12, 18, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b", Pod:"csi-node-driver-794fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdaa8153ca3", MAC:"9a:64:2f:9f:f1:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:46:11.132629 containerd[1574]: 2025-12-12 18:46:11.125 [INFO][3002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" Namespace="calico-system" Pod="csi-node-driver-794fx" WorkloadEndpoint="192.168.177.56-k8s-csi--node--driver--794fx-eth0" Dec 12 18:46:11.185540 containerd[1574]: time="2025-12-12T18:46:11.185415874Z" level=info msg="connecting to shim ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b" address="unix:///run/containerd/s/a04f739ee875a84ea534911b781f7704c9eb5dd6f6f78f5342ecc93615c57eed" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:46:11.279135 systemd[1]: Started 
cri-containerd-ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b.scope - libcontainer container ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b. Dec 12 18:46:11.596010 containerd[1574]: time="2025-12-12T18:46:11.595960908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-794fx,Uid:9d771747-d366-4e3a-b362-45818ffae2f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee6549762b700c473b1aed402df83b2c054339db9ec302a3ee82cc346af54e5b\"" Dec 12 18:46:11.598265 containerd[1574]: time="2025-12-12T18:46:11.598242609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:46:11.704116 systemd-networkd[1477]: vxlan.calico: Link UP Dec 12 18:46:11.704128 systemd-networkd[1477]: vxlan.calico: Gained carrier Dec 12 18:46:11.705631 kubelet[2072]: E1212 18:46:11.705526 2072 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:11.739022 containerd[1574]: time="2025-12-12T18:46:11.736981200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:11.739675 containerd[1574]: time="2025-12-12T18:46:11.739633561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:46:11.739762 containerd[1574]: time="2025-12-12T18:46:11.739743411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:46:11.741930 kubelet[2072]: E1212 18:46:11.739962 2072 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:46:11.741930 kubelet[2072]: E1212 18:46:11.740052 2072 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:46:11.741930 kubelet[2072]: E1212 18:46:11.740392 2072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q89j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:
*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:11.744236 containerd[1574]: time="2025-12-12T18:46:11.744199212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:46:11.828263 sshd[2967]: Connection closed by 188.241.218.191 port 61050 [preauth] Dec 12 18:46:11.830761 kubelet[2072]: E1212 18:46:11.830673 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:11.837467 systemd[1]: sshd@46-172.237.139.56:22-188.241.218.191:61050.service: Deactivated successfully. 
Dec 12 18:46:11.880097 containerd[1574]: time="2025-12-12T18:46:11.878029883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:11.881403 containerd[1574]: time="2025-12-12T18:46:11.881347303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:46:11.881645 containerd[1574]: time="2025-12-12T18:46:11.881622824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:46:11.882233 kubelet[2072]: E1212 18:46:11.882173 2072 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:46:11.882363 kubelet[2072]: E1212 18:46:11.882345 2072 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:46:11.882914 kubelet[2072]: E1212 18:46:11.882874 2072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q89j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:11.884386 kubelet[2072]: E1212 18:46:11.884323 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:46:12.191503 sshd[2998]: Connection closed by 65.109.208.123 port 38964 [preauth] Dec 12 18:46:12.193128 systemd[1]: sshd@47-172.237.139.56:22-65.109.208.123:38964.service: Deactivated successfully. 
Dec 12 18:46:12.365082 containerd[1574]: time="2025-12-12T18:46:12.364981309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2blbf,Uid:9e091f2a-4a6b-45f5-b2aa-caf7f82498f7,Namespace:default,Attempt:0,}" Dec 12 18:46:12.501625 systemd-networkd[1477]: cali4512ff24ca8: Link UP Dec 12 18:46:12.502005 systemd-networkd[1477]: cali4512ff24ca8: Gained carrier Dec 12 18:46:12.528368 containerd[1574]: 2025-12-12 18:46:12.420 [INFO][3189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0 nginx-deployment-7fcdb87857- default 9e091f2a-4a6b-45f5-b2aa-caf7f82498f7 1577 0 2025-12-12 18:45:45 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 192.168.177.56 nginx-deployment-7fcdb87857-2blbf eth0 default [] [] [kns.default ksa.default.default] cali4512ff24ca8 [] [] }} ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-" Dec 12 18:46:12.528368 containerd[1574]: 2025-12-12 18:46:12.420 [INFO][3189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.528368 containerd[1574]: 2025-12-12 18:46:12.453 [INFO][3200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" HandleID="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" 
Workload="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.454 [INFO][3200] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" HandleID="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Workload="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"default", "node":"192.168.177.56", "pod":"nginx-deployment-7fcdb87857-2blbf", "timestamp":"2025-12-12 18:46:12.453673188 +0000 UTC"}, Hostname:"192.168.177.56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.454 [INFO][3200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.454 [INFO][3200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.454 [INFO][3200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.177.56' Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.464 [INFO][3200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" host="192.168.177.56" Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.468 [INFO][3200] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.177.56" Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.472 [INFO][3200] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.474 [INFO][3200] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:12.533902 containerd[1574]: 2025-12-12 18:46:12.476 [INFO][3200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.476 [INFO][3200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" host="192.168.177.56" Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.479 [INFO][3200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.486 [INFO][3200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" host="192.168.177.56" Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.491 [INFO][3200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.130/26] block=192.168.93.128/26 
handle="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" host="192.168.177.56" Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.491 [INFO][3200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.130/26] handle="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" host="192.168.177.56" Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.491 [INFO][3200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:46:12.534182 containerd[1574]: 2025-12-12 18:46:12.491 [INFO][3200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.130/26] IPv6=[] ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" HandleID="k8s-pod-network.7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Workload="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.534331 containerd[1574]: 2025-12-12 18:46:12.494 [INFO][3189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"9e091f2a-4a6b-45f5-b2aa-caf7f82498f7", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-2blbf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4512ff24ca8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:46:12.534331 containerd[1574]: 2025-12-12 18:46:12.494 [INFO][3189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.130/32] ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.534423 containerd[1574]: 2025-12-12 18:46:12.494 [INFO][3189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4512ff24ca8 ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.534423 containerd[1574]: 2025-12-12 18:46:12.501 [INFO][3189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.534465 containerd[1574]: 2025-12-12 18:46:12.504 [INFO][3189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" 
Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"9e091f2a-4a6b-45f5-b2aa-caf7f82498f7", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae", Pod:"nginx-deployment-7fcdb87857-2blbf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4512ff24ca8", MAC:"f6:2c:fa:b9:ec:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:46:12.534529 containerd[1574]: 2025-12-12 18:46:12.517 [INFO][3189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" Namespace="default" Pod="nginx-deployment-7fcdb87857-2blbf" WorkloadEndpoint="192.168.177.56-k8s-nginx--deployment--7fcdb87857--2blbf-eth0" Dec 12 18:46:12.573890 containerd[1574]: 
time="2025-12-12T18:46:12.573498824Z" level=info msg="connecting to shim 7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae" address="unix:///run/containerd/s/087582fda754c22a2f0fc6a4698928d512f068991ad73863d03a08b3da8e891a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:46:12.583987 systemd-networkd[1477]: calibdaa8153ca3: Gained IPv6LL Dec 12 18:46:12.634022 systemd[1]: Started cri-containerd-7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae.scope - libcontainer container 7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae. Dec 12 18:46:12.726819 containerd[1574]: time="2025-12-12T18:46:12.726577387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2blbf,Uid:9e091f2a-4a6b-45f5-b2aa-caf7f82498f7,Namespace:default,Attempt:0,} returns sandbox id \"7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae\"" Dec 12 18:46:12.734355 containerd[1574]: time="2025-12-12T18:46:12.734279829Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 12 18:46:12.832255 kubelet[2072]: E1212 18:46:12.832058 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:12.854396 kubelet[2072]: E1212 18:46:12.854255 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:46:13.197636 systemd[1]: Started sshd@48-172.237.139.56:22-5.216.192.245:60996.service - OpenSSH per-connection server daemon (5.216.192.245:60996). Dec 12 18:46:13.937923 kubelet[2072]: E1212 18:46:13.932209 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:13.941475 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Dec 12 18:46:14.120421 systemd-networkd[1477]: cali4512ff24ca8: Gained IPv6LL Dec 12 18:46:14.794219 sshd[3265]: Connection closed by 5.216.192.245 port 60996 [preauth] Dec 12 18:46:14.802450 systemd[1]: sshd@48-172.237.139.56:22-5.216.192.245:60996.service: Deactivated successfully. Dec 12 18:46:14.933295 kubelet[2072]: E1212 18:46:14.933139 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:15.737136 systemd[1]: Started sshd@49-172.237.139.56:22-95.25.93.186:10729.service - OpenSSH per-connection server daemon (95.25.93.186:10729). Dec 12 18:46:15.933976 sshd[3276]: Connection closed by 95.25.93.186 port 10729 [preauth] Dec 12 18:46:15.935275 kubelet[2072]: E1212 18:46:15.934340 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:15.937348 systemd[1]: sshd@49-172.237.139.56:22-95.25.93.186:10729.service: Deactivated successfully. Dec 12 18:46:15.977945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828387170.mount: Deactivated successfully. 
Dec 12 18:46:16.935388 kubelet[2072]: E1212 18:46:16.935322 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:17.936362 kubelet[2072]: E1212 18:46:17.936301 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:18.181602 containerd[1574]: time="2025-12-12T18:46:18.180361783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:18.181602 containerd[1574]: time="2025-12-12T18:46:18.181232433Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73312336" Dec 12 18:46:18.181602 containerd[1574]: time="2025-12-12T18:46:18.181537743Z" level=info msg="ImageCreate event name:\"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:18.183795 containerd[1574]: time="2025-12-12T18:46:18.183759723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:3db8be616067ff6bd4534d63c0a1427862e285068488ddccf319982871e49aac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:18.184972 containerd[1574]: time="2025-12-12T18:46:18.184943443Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:3db8be616067ff6bd4534d63c0a1427862e285068488ddccf319982871e49aac\", size \"73312214\" in 5.450605894s" Dec 12 18:46:18.185085 containerd[1574]: time="2025-12-12T18:46:18.185064813Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\"" Dec 12 18:46:18.190553 containerd[1574]: 
time="2025-12-12T18:46:18.190194494Z" level=info msg="CreateContainer within sandbox \"7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 12 18:46:18.201336 containerd[1574]: time="2025-12-12T18:46:18.199819566Z" level=info msg="Container 7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:18.207535 containerd[1574]: time="2025-12-12T18:46:18.207509557Z" level=info msg="CreateContainer within sandbox \"7f40b641c8e08fe9e7aaa181441f751c0a8850de0abac83107e56b2b93bebaae\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92\"" Dec 12 18:46:18.208070 containerd[1574]: time="2025-12-12T18:46:18.208049457Z" level=info msg="StartContainer for \"7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92\"" Dec 12 18:46:18.209073 containerd[1574]: time="2025-12-12T18:46:18.209019287Z" level=info msg="connecting to shim 7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92" address="unix:///run/containerd/s/087582fda754c22a2f0fc6a4698928d512f068991ad73863d03a08b3da8e891a" protocol=ttrpc version=3 Dec 12 18:46:18.271171 systemd[1]: Started cri-containerd-7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92.scope - libcontainer container 7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92. 
Dec 12 18:46:18.329780 containerd[1574]: time="2025-12-12T18:46:18.329735895Z" level=info msg="StartContainer for \"7396b4d025eb3b61badc7de76f2116462f0209b86e446fd28eccd8c5861a9b92\" returns successfully" Dec 12 18:46:18.937030 kubelet[2072]: E1212 18:46:18.936955 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:19.937886 kubelet[2072]: E1212 18:46:19.937713 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:20.938030 kubelet[2072]: E1212 18:46:20.937942 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:21.078674 systemd[1]: Started sshd@50-172.237.139.56:22-5.212.208.241:34860.service - OpenSSH per-connection server daemon (5.212.208.241:34860). Dec 12 18:46:21.705964 systemd[1]: Started sshd@51-172.237.139.56:22-57.128.191.241:37354.service - OpenSSH per-connection server daemon (57.128.191.241:37354). Dec 12 18:46:21.709361 systemd[1]: Started sshd@52-172.237.139.56:22-5.216.43.83:57670.service - OpenSSH per-connection server daemon (5.216.43.83:57670). Dec 12 18:46:21.939022 kubelet[2072]: E1212 18:46:21.938936 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:22.519498 systemd[1]: Started sshd@53-172.237.139.56:22-2.147.58.149:28366.service - OpenSSH per-connection server daemon (2.147.58.149:28366). Dec 12 18:46:22.807223 sshd[3365]: Connection closed by 5.212.208.241 port 34860 [preauth] Dec 12 18:46:22.806294 systemd[1]: sshd@50-172.237.139.56:22-5.212.208.241:34860.service: Deactivated successfully. 
Dec 12 18:46:22.941378 kubelet[2072]: E1212 18:46:22.941283 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:23.134720 systemd[1]: Started sshd@54-172.237.139.56:22-91.251.28.246:2936.service - OpenSSH per-connection server daemon (91.251.28.246:2936). Dec 12 18:46:23.280189 kubelet[2072]: I1212 18:46:23.280062 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2blbf" podStartSLOduration=32.823933515 podStartE2EDuration="38.280005201s" podCreationTimestamp="2025-12-12 18:45:45 +0000 UTC" firstStartedPulling="2025-12-12 18:46:12.730477538 +0000 UTC m=+41.366772003" lastFinishedPulling="2025-12-12 18:46:18.186549224 +0000 UTC m=+46.822843689" observedRunningTime="2025-12-12 18:46:19.017279875 +0000 UTC m=+47.653574340" watchObservedRunningTime="2025-12-12 18:46:23.280005201 +0000 UTC m=+51.916299666" Dec 12 18:46:23.299890 systemd[1]: Created slice kubepods-besteffort-podca82d2a2_31f2_4788_957f_6f278671fb8e.slice - libcontainer container kubepods-besteffort-podca82d2a2_31f2_4788_957f_6f278671fb8e.slice. 
Dec 12 18:46:23.397296 kubelet[2072]: I1212 18:46:23.397238 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ca82d2a2-31f2-4788-957f-6f278671fb8e-data\") pod \"nfs-server-provisioner-0\" (UID: \"ca82d2a2-31f2-4788-957f-6f278671fb8e\") " pod="default/nfs-server-provisioner-0" Dec 12 18:46:23.397479 kubelet[2072]: I1212 18:46:23.397329 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-949mc\" (UniqueName: \"kubernetes.io/projected/ca82d2a2-31f2-4788-957f-6f278671fb8e-kube-api-access-949mc\") pod \"nfs-server-provisioner-0\" (UID: \"ca82d2a2-31f2-4788-957f-6f278671fb8e\") " pod="default/nfs-server-provisioner-0" Dec 12 18:46:23.464521 sshd[3370]: Connection closed by 57.128.191.241 port 37354 [preauth] Dec 12 18:46:23.466277 systemd[1]: sshd@51-172.237.139.56:22-57.128.191.241:37354.service: Deactivated successfully. Dec 12 18:46:23.604705 containerd[1574]: time="2025-12-12T18:46:23.604588724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ca82d2a2-31f2-4788-957f-6f278671fb8e,Namespace:default,Attempt:0,}" Dec 12 18:46:23.796042 sshd[3383]: Connection closed by 2.147.58.149 port 28366 [preauth] Dec 12 18:46:23.805547 systemd[1]: sshd@53-172.237.139.56:22-2.147.58.149:28366.service: Deactivated successfully. Dec 12 18:46:23.812169 systemd[1]: Started sshd@55-172.237.139.56:22-40.160.228.61:58820.service - OpenSSH per-connection server daemon (40.160.228.61:58820). 
Dec 12 18:46:23.885560 systemd-networkd[1477]: cali60e51b789ff: Link UP Dec 12 18:46:23.887371 systemd-networkd[1477]: cali60e51b789ff: Gained carrier Dec 12 18:46:23.905350 containerd[1574]: 2025-12-12 18:46:23.670 [INFO][3401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.177.56-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ca82d2a2-31f2-4788-957f-6f278671fb8e 1763 0 2025-12-12 18:46:23 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 192.168.177.56 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-" Dec 12 18:46:23.905350 containerd[1574]: 2025-12-12 18:46:23.671 [INFO][3401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.905350 containerd[1574]: 2025-12-12 18:46:23.751 [INFO][3414] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" HandleID="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Workload="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.751 [INFO][3414] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" HandleID="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Workload="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"default", "node":"192.168.177.56", "pod":"nfs-server-provisioner-0", "timestamp":"2025-12-12 18:46:23.75103162 +0000 UTC"}, Hostname:"192.168.177.56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.751 [INFO][3414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.751 [INFO][3414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.751 [INFO][3414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.177.56' Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.808 [INFO][3414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" host="192.168.177.56" Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.822 [INFO][3414] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.177.56" Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.828 [INFO][3414] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.832 [INFO][3414] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.849 [INFO][3414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="192.168.177.56" Dec 12 18:46:23.905563 containerd[1574]: 2025-12-12 18:46:23.850 [INFO][3414] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" host="192.168.177.56" Dec 12 18:46:23.905999 containerd[1574]: 2025-12-12 18:46:23.857 [INFO][3414] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13 Dec 12 18:46:23.905999 containerd[1574]: 2025-12-12 18:46:23.864 [INFO][3414] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" host="192.168.177.56" Dec 12 18:46:23.905999 containerd[1574]: 2025-12-12 18:46:23.876 [INFO][3414] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.131/26] block=192.168.93.128/26 
handle="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" host="192.168.177.56" Dec 12 18:46:23.905999 containerd[1574]: 2025-12-12 18:46:23.876 [INFO][3414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.131/26] handle="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" host="192.168.177.56" Dec 12 18:46:23.905999 containerd[1574]: 2025-12-12 18:46:23.876 [INFO][3414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:46:23.905999 containerd[1574]: 2025-12-12 18:46:23.876 [INFO][3414] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.131/26] IPv6=[] ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" HandleID="k8s-pod-network.dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Workload="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.906326 containerd[1574]: 2025-12-12 18:46:23.879 [INFO][3401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ca82d2a2-31f2-4788-957f-6f278671fb8e", ResourceVersion:"1763", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:46:23.906326 containerd[1574]: 2025-12-12 18:46:23.879 [INFO][3401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.131/32] ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.906326 containerd[1574]: 2025-12-12 18:46:23.879 [INFO][3401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.906326 containerd[1574]: 2025-12-12 18:46:23.888 [INFO][3401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.906609 containerd[1574]: 2025-12-12 18:46:23.889 [INFO][3401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ca82d2a2-31f2-4788-957f-6f278671fb8e", ResourceVersion:"1763", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.93.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ba:f0:7d:c8:80:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:46:23.906609 containerd[1574]: 2025-12-12 18:46:23.903 [INFO][3401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="192.168.177.56-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:46:23.942948 kubelet[2072]: E1212 18:46:23.942905 2072 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:23.947532 containerd[1574]: time="2025-12-12T18:46:23.947483231Z" level=info msg="connecting to shim dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13" address="unix:///run/containerd/s/1c8385a8704afde06e53614bc9ada00ada163ec17459fde10348982569efd33f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:46:24.175006 systemd[1]: Started cri-containerd-dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13.scope - libcontainer container dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13. Dec 12 18:46:24.274249 containerd[1574]: time="2025-12-12T18:46:24.274178184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ca82d2a2-31f2-4788-957f-6f278671fb8e,Namespace:default,Attempt:0,} returns sandbox id \"dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13\"" Dec 12 18:46:24.277500 containerd[1574]: time="2025-12-12T18:46:24.277452404Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 12 18:46:24.663531 sshd[3391]: Connection closed by 91.251.28.246 port 2936 [preauth] Dec 12 18:46:24.665859 systemd[1]: sshd@54-172.237.139.56:22-91.251.28.246:2936.service: Deactivated successfully. Dec 12 18:46:24.675167 sshd[3423]: Connection closed by 40.160.228.61 port 58820 [preauth] Dec 12 18:46:24.676433 systemd[1]: sshd@55-172.237.139.56:22-40.160.228.61:58820.service: Deactivated successfully. 
Dec 12 18:46:24.943723 kubelet[2072]: E1212 18:46:24.943559 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:25.452116 systemd-networkd[1477]: cali60e51b789ff: Gained IPv6LL Dec 12 18:46:25.944881 kubelet[2072]: E1212 18:46:25.944717 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:26.238419 systemd[1]: Started sshd@56-172.237.139.56:22-128.77.91.159:60248.service - OpenSSH per-connection server daemon (128.77.91.159:60248). Dec 12 18:46:26.293662 sshd[3491]: Connection closed by 128.77.91.159 port 60248 Dec 12 18:46:26.296714 systemd[1]: sshd@56-172.237.139.56:22-128.77.91.159:60248.service: Deactivated successfully. Dec 12 18:46:26.443971 systemd[1]: Started sshd@57-172.237.139.56:22-86.55.98.20:31126.service - OpenSSH per-connection server daemon (86.55.98.20:31126). Dec 12 18:46:26.978064 kubelet[2072]: E1212 18:46:26.978019 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:27.238774 sshd[3496]: Connection closed by 86.55.98.20 port 31126 [preauth] Dec 12 18:46:27.241346 systemd[1]: sshd@57-172.237.139.56:22-86.55.98.20:31126.service: Deactivated successfully. Dec 12 18:46:27.705291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913502808.mount: Deactivated successfully. 
Dec 12 18:46:27.980125 kubelet[2072]: E1212 18:46:27.979683 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:28.981346 kubelet[2072]: E1212 18:46:28.981048 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:29.982359 kubelet[2072]: E1212 18:46:29.982243 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:30.301132 systemd[1]: Started sshd@58-172.237.139.56:22-95.64.77.172:48522.service - OpenSSH per-connection server daemon (95.64.77.172:48522). Dec 12 18:46:30.584060 systemd[1]: Started sshd@59-172.237.139.56:22-172.80.252.141:28428.service - OpenSSH per-connection server daemon (172.80.252.141:28428). Dec 12 18:46:30.983173 kubelet[2072]: E1212 18:46:30.982951 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:31.162749 containerd[1574]: time="2025-12-12T18:46:31.162623503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:31.165031 containerd[1574]: time="2025-12-12T18:46:31.164163853Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 12 18:46:31.165571 containerd[1574]: time="2025-12-12T18:46:31.165532814Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:31.168880 containerd[1574]: time="2025-12-12T18:46:31.168689674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:31.170967 containerd[1574]: time="2025-12-12T18:46:31.170906994Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.89340248s" Dec 12 18:46:31.170967 containerd[1574]: time="2025-12-12T18:46:31.170967124Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 12 18:46:31.173463 containerd[1574]: time="2025-12-12T18:46:31.173135544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:46:31.180160 containerd[1574]: time="2025-12-12T18:46:31.180125444Z" level=info msg="CreateContainer within sandbox \"dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 12 18:46:31.195866 containerd[1574]: time="2025-12-12T18:46:31.194180695Z" level=info msg="Container 2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:31.197791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976938227.mount: Deactivated successfully. 
Dec 12 18:46:31.205676 containerd[1574]: time="2025-12-12T18:46:31.205648756Z" level=info msg="CreateContainer within sandbox \"dd702e5336313441027de0a25ab97bcecd32b1bc4cc51b51dbae744de3b86a13\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851\"" Dec 12 18:46:31.207628 containerd[1574]: time="2025-12-12T18:46:31.207604566Z" level=info msg="StartContainer for \"2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851\"" Dec 12 18:46:31.209562 containerd[1574]: time="2025-12-12T18:46:31.209535706Z" level=info msg="connecting to shim 2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851" address="unix:///run/containerd/s/1c8385a8704afde06e53614bc9ada00ada163ec17459fde10348982569efd33f" protocol=ttrpc version=3 Dec 12 18:46:31.271061 systemd[1]: Started cri-containerd-2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851.scope - libcontainer container 2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851. 
Dec 12 18:46:31.324403 containerd[1574]: time="2025-12-12T18:46:31.323008093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:31.330207 containerd[1574]: time="2025-12-12T18:46:31.327346764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:46:31.330207 containerd[1574]: time="2025-12-12T18:46:31.327448304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:46:31.332068 kubelet[2072]: E1212 18:46:31.330670 2072 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:46:31.332068 kubelet[2072]: E1212 18:46:31.330880 2072 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:46:31.332697 kubelet[2072]: E1212 18:46:31.332623 2072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q89j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:31.336442 containerd[1574]: time="2025-12-12T18:46:31.336410835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:46:31.352316 containerd[1574]: time="2025-12-12T18:46:31.352258356Z" level=info msg="StartContainer for \"2d51f9ef08cae0183117c3ee52ff710d615edc46fa6a8dc200937dc99dd11851\" returns successfully" Dec 12 18:46:31.473802 containerd[1574]: time="2025-12-12T18:46:31.473736353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:31.475418 containerd[1574]: time="2025-12-12T18:46:31.475300543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:46:31.475418 containerd[1574]: time="2025-12-12T18:46:31.475391193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:46:31.475850 kubelet[2072]: E1212 18:46:31.475747 2072 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:46:31.475960 kubelet[2072]: E1212 18:46:31.475827 2072 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:46:31.476554 kubelet[2072]: E1212 18:46:31.476434 2072 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q89j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,E
nvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-794fx_calico-system(9d771747-d366-4e3a-b362-45818ffae2f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:31.478006 kubelet[2072]: E1212 18:46:31.477935 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6" Dec 12 18:46:31.705079 kubelet[2072]: E1212 18:46:31.704996 2072 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:31.984253 kubelet[2072]: E1212 18:46:31.983880 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:32.226634 sshd[3371]: Connection closed by 5.216.43.83 port 57670 [preauth] Dec 12 18:46:32.230478 systemd[1]: sshd@52-172.237.139.56:22-5.216.43.83:57670.service: Deactivated 
successfully. Dec 12 18:46:32.985069 kubelet[2072]: E1212 18:46:32.985011 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:33.146125 systemd[1]: Started sshd@60-172.237.139.56:22-5.190.62.167:22914.service - OpenSSH per-connection server daemon (5.190.62.167:22914). Dec 12 18:46:33.359499 systemd[1]: Started sshd@61-172.237.139.56:22-46.51.16.102:8637.service - OpenSSH per-connection server daemon (46.51.16.102:8637). Dec 12 18:46:33.626561 sshd[3508]: Connection closed by 95.64.77.172 port 48522 [preauth] Dec 12 18:46:33.628932 systemd[1]: sshd@58-172.237.139.56:22-95.64.77.172:48522.service: Deactivated successfully. Dec 12 18:46:33.952867 sshd[3609]: Connection closed by 5.190.62.167 port 22914 [preauth] Dec 12 18:46:33.956493 systemd[1]: sshd@60-172.237.139.56:22-5.190.62.167:22914.service: Deactivated successfully. Dec 12 18:46:33.985399 kubelet[2072]: E1212 18:46:33.985356 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:34.258720 sshd[3613]: Connection closed by 46.51.16.102 port 8637 [preauth] Dec 12 18:46:34.261342 systemd[1]: sshd@61-172.237.139.56:22-46.51.16.102:8637.service: Deactivated successfully. Dec 12 18:46:34.986402 kubelet[2072]: E1212 18:46:34.986335 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:46:35.456785 systemd[1]: Started sshd@62-172.237.139.56:22-2.187.123.193:38514.service - OpenSSH per-connection server daemon (2.187.123.193:38514). 
Dec 12 18:46:35.987244 kubelet[2072]: E1212 18:46:35.987157 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:36.350684 sshd[3625]: Connection closed by 2.187.123.193 port 38514 [preauth]
Dec 12 18:46:36.353426 systemd[1]: sshd@62-172.237.139.56:22-2.187.123.193:38514.service: Deactivated successfully.
Dec 12 18:46:36.615124 kubelet[2072]: I1212 18:46:36.614636 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.718795011 podStartE2EDuration="13.614568681s" podCreationTimestamp="2025-12-12 18:46:23 +0000 UTC" firstStartedPulling="2025-12-12 18:46:24.276715594 +0000 UTC m=+52.913010069" lastFinishedPulling="2025-12-12 18:46:31.172489274 +0000 UTC m=+59.808783739" observedRunningTime="2025-12-12 18:46:32.060208111 +0000 UTC m=+60.696502576" watchObservedRunningTime="2025-12-12 18:46:36.614568681 +0000 UTC m=+65.250863156"
Dec 12 18:46:36.624555 systemd[1]: Created slice kubepods-besteffort-pod30a126e0_9163_47ee_8eff_488828a9112f.slice - libcontainer container kubepods-besteffort-pod30a126e0_9163_47ee_8eff_488828a9112f.slice.
Dec 12 18:46:36.801189 kubelet[2072]: I1212 18:46:36.801134 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbgf\" (UniqueName: \"kubernetes.io/projected/30a126e0-9163-47ee-8eff-488828a9112f-kube-api-access-lbbgf\") pod \"test-pod-1\" (UID: \"30a126e0-9163-47ee-8eff-488828a9112f\") " pod="default/test-pod-1"
Dec 12 18:46:36.801189 kubelet[2072]: I1212 18:46:36.801178 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-302a1419-bd2a-46ea-a4d1-e4c19a91ccde\" (UniqueName: \"kubernetes.io/nfs/30a126e0-9163-47ee-8eff-488828a9112f-pvc-302a1419-bd2a-46ea-a4d1-e4c19a91ccde\") pod \"test-pod-1\" (UID: \"30a126e0-9163-47ee-8eff-488828a9112f\") " pod="default/test-pod-1"
Dec 12 18:46:36.948986 kernel: netfs: FS-Cache loaded
Dec 12 18:46:36.988600 kubelet[2072]: E1212 18:46:36.988425 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:37.015103 kernel: RPC: Registered named UNIX socket transport module.
Dec 12 18:46:37.015196 kernel: RPC: Registered udp transport module.
Dec 12 18:46:37.017629 kernel: RPC: Registered tcp transport module.
Dec 12 18:46:37.017669 kernel: RPC: Registered tcp-with-tls transport module.
Dec 12 18:46:37.019888 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 12 18:46:37.295173 kernel: NFS: Registering the id_resolver key type
Dec 12 18:46:37.295313 kernel: Key type id_resolver registered
Dec 12 18:46:37.297717 kernel: Key type id_legacy registered
Dec 12 18:46:37.365323 nfsidmap[3656]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Dec 12 18:46:37.368611 nfsidmap[3656]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 12 18:46:37.372775 nfsidmap[3657]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Dec 12 18:46:37.373029 nfsidmap[3657]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 12 18:46:37.392287 nfsrahead[3659]: setting /var/lib/kubelet/pods/30a126e0-9163-47ee-8eff-488828a9112f/volumes/kubernetes.io~nfs/pvc-302a1419-bd2a-46ea-a4d1-e4c19a91ccde readahead to 128
Dec 12 18:46:37.530662 containerd[1574]: time="2025-12-12T18:46:37.530052652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:30a126e0-9163-47ee-8eff-488828a9112f,Namespace:default,Attempt:0,}"
Dec 12 18:46:37.644062 systemd[1]: Started sshd@63-172.237.139.56:22-89.198.32.53:45297.service - OpenSSH per-connection server daemon (89.198.32.53:45297).
Dec 12 18:46:37.730871 systemd-networkd[1477]: cali5ec59c6bf6e: Link UP
Dec 12 18:46:37.732646 systemd-networkd[1477]: cali5ec59c6bf6e: Gained carrier
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.616 [INFO][3660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {192.168.177.56-k8s-test--pod--1-eth0 default 30a126e0-9163-47ee-8eff-488828a9112f 1847 0 2025-12-12 18:46:24 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 192.168.177.56 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.616 [INFO][3660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.683 [INFO][3672] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" HandleID="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Workload="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.684 [INFO][3672] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" HandleID="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Workload="192.168.177.56-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0d0), Attrs:map[string]string{"namespace":"default", "node":"192.168.177.56", "pod":"test-pod-1", "timestamp":"2025-12-12 18:46:37.683718218 +0000 UTC"}, Hostname:"192.168.177.56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.684 [INFO][3672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.684 [INFO][3672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.684 [INFO][3672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '192.168.177.56'
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.693 [INFO][3672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.698 [INFO][3672] ipam/ipam.go 394: Looking up existing affinities for host host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.702 [INFO][3672] ipam/ipam.go 511: Trying affinity for 192.168.93.128/26 host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.706 [INFO][3672] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.128/26 host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.709 [INFO][3672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.128/26 host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.709 [INFO][3672] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.93.128/26 handle="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.711 [INFO][3672] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.716 [INFO][3672] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.93.128/26 handle="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.721 [INFO][3672] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.93.132/26] block=192.168.93.128/26 handle="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.721 [INFO][3672] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.132/26] handle="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" host="192.168.177.56"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.721 [INFO][3672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.721 [INFO][3672] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.93.132/26] IPv6=[] ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" HandleID="k8s-pod-network.1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Workload="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:37.745033 containerd[1574]: 2025-12-12 18:46:37.724 [INFO][3660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"30a126e0-9163-47ee-8eff-488828a9112f", ResourceVersion:"1847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:46:37.745695 containerd[1574]: 2025-12-12 18:46:37.724 [INFO][3660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.132/32] ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:37.745695 containerd[1574]: 2025-12-12 18:46:37.724 [INFO][3660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:37.745695 containerd[1574]: 2025-12-12 18:46:37.734 [INFO][3660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:37.745695 containerd[1574]: 2025-12-12 18:46:37.736 [INFO][3660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"192.168.177.56-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"30a126e0-9163-47ee-8eff-488828a9112f", ResourceVersion:"1847", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"192.168.177.56", ContainerID:"1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2a:a0:3f:db:45:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:46:37.745695 containerd[1574]: 2025-12-12 18:46:37.742 [INFO][3660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="192.168.177.56-k8s-test--pod--1-eth0"
Dec 12 18:46:38.014632 kubelet[2072]: E1212 18:46:38.012517 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:38.040810 containerd[1574]: time="2025-12-12T18:46:38.040751373Z" level=info msg="connecting to shim 1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61" address="unix:///run/containerd/s/44172cbfc5df91dd414e9aae988dd7f98912d49984e1a209c2b4e2e538ef4009" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:38.132015 systemd[1]: Started cri-containerd-1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61.scope - libcontainer container 1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61.
Dec 12 18:46:38.202236 containerd[1574]: time="2025-12-12T18:46:38.202196829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:30a126e0-9163-47ee-8eff-488828a9112f,Namespace:default,Attempt:0,} returns sandbox id \"1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61\""
Dec 12 18:46:38.203618 containerd[1574]: time="2025-12-12T18:46:38.203588199Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 12 18:46:38.412974 containerd[1574]: time="2025-12-12T18:46:38.412924208Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:38.413899 containerd[1574]: time="2025-12-12T18:46:38.413857568Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 12 18:46:38.416772 containerd[1574]: time="2025-12-12T18:46:38.416727288Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:3db8be616067ff6bd4534d63c0a1427862e285068488ddccf319982871e49aac\", size \"73312214\" in 213.103599ms"
Dec 12 18:46:38.416772 containerd[1574]: time="2025-12-12T18:46:38.416769918Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\""
Dec 12 18:46:38.420693 containerd[1574]: time="2025-12-12T18:46:38.420660228Z" level=info msg="CreateContainer within sandbox \"1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 12 18:46:38.430858 containerd[1574]: time="2025-12-12T18:46:38.428603079Z" level=info msg="Container 594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:38.439436 containerd[1574]: time="2025-12-12T18:46:38.439213199Z" level=info msg="CreateContainer within sandbox \"1c6315d165a99e5859cb1789d395882ce99890b35ab1e16aa4b4297a0faf2f61\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27\""
Dec 12 18:46:38.440331 containerd[1574]: time="2025-12-12T18:46:38.440308039Z" level=info msg="StartContainer for \"594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27\""
Dec 12 18:46:38.441341 containerd[1574]: time="2025-12-12T18:46:38.441306389Z" level=info msg="connecting to shim 594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27" address="unix:///run/containerd/s/44172cbfc5df91dd414e9aae988dd7f98912d49984e1a209c2b4e2e538ef4009" protocol=ttrpc version=3
Dec 12 18:46:38.473968 systemd[1]: Started cri-containerd-594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27.scope - libcontainer container 594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27.
Dec 12 18:46:38.481089 sshd[3677]: Connection closed by 89.198.32.53 port 45297 [preauth]
Dec 12 18:46:38.483586 systemd[1]: sshd@63-172.237.139.56:22-89.198.32.53:45297.service: Deactivated successfully.
Dec 12 18:46:38.525596 containerd[1574]: time="2025-12-12T18:46:38.525553622Z" level=info msg="StartContainer for \"594885f7e5ac8580411b39601918da72ba3401ef331c95251debaabb2ccced27\" returns successfully"
Dec 12 18:46:38.666260 systemd[1]: Started sshd@64-172.237.139.56:22-37.98.30.49:18191.service - OpenSSH per-connection server daemon (37.98.30.49:18191).
Dec 12 18:46:38.865110 systemd[1]: Started sshd@65-172.237.139.56:22-5.215.36.195:41818.service - OpenSSH per-connection server daemon (5.215.36.195:41818).
Dec 12 18:46:39.017041 kubelet[2072]: E1212 18:46:39.016902 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:39.106892 systemd[1]: Started sshd@66-172.237.139.56:22-2.147.223.135:39725.service - OpenSSH per-connection server daemon (2.147.223.135:39725).
Dec 12 18:46:39.285034 systemd[1]: Started sshd@67-172.237.139.56:22-93.110.253.13:42585.service - OpenSSH per-connection server daemon (93.110.253.13:42585).
Dec 12 18:46:39.592992 systemd-networkd[1477]: cali5ec59c6bf6e: Gained IPv6LL
Dec 12 18:46:39.659282 sshd[3792]: Connection closed by 37.98.30.49 port 18191 [preauth]
Dec 12 18:46:39.660398 systemd[1]: sshd@64-172.237.139.56:22-37.98.30.49:18191.service: Deactivated successfully.
Dec 12 18:46:39.677348 kubelet[2072]: E1212 18:46:39.677319 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Dec 12 18:46:39.691020 kubelet[2072]: I1212 18:46:39.690961 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.476604958 podStartE2EDuration="15.690949447s" podCreationTimestamp="2025-12-12 18:46:24 +0000 UTC" firstStartedPulling="2025-12-12 18:46:38.203341649 +0000 UTC m=+66.839636114" lastFinishedPulling="2025-12-12 18:46:38.417686138 +0000 UTC m=+67.053980603" observedRunningTime="2025-12-12 18:46:39.079146804 +0000 UTC m=+67.715441269" watchObservedRunningTime="2025-12-12 18:46:39.690949447 +0000 UTC m=+68.327243912"
Dec 12 18:46:39.766085 systemd[1]: Started sshd@68-172.237.139.56:22-91.133.156.115:53272.service - OpenSSH per-connection server daemon (91.133.156.115:53272).
Dec 12 18:46:39.917979 sshd[3800]: Connection closed by 2.147.223.135 port 39725 [preauth]
Dec 12 18:46:39.919949 systemd[1]: sshd@66-172.237.139.56:22-2.147.223.135:39725.service: Deactivated successfully.
Dec 12 18:46:39.948141 sshd[3796]: Connection closed by 5.215.36.195 port 41818 [preauth]
Dec 12 18:46:39.950126 systemd[1]: sshd@65-172.237.139.56:22-5.215.36.195:41818.service: Deactivated successfully.
Dec 12 18:46:40.017088 kubelet[2072]: E1212 18:46:40.017027 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:40.241334 systemd[1]: Started sshd@69-172.237.139.56:22-5.22.108.63:43907.service - OpenSSH per-connection server daemon (5.22.108.63:43907).
Dec 12 18:46:40.373545 sshd[3804]: Connection closed by 93.110.253.13 port 42585 [preauth]
Dec 12 18:46:40.376234 systemd[1]: sshd@67-172.237.139.56:22-93.110.253.13:42585.service: Deactivated successfully.
Dec 12 18:46:40.523926 sshd[3835]: Connection closed by 91.133.156.115 port 53272 [preauth]
Dec 12 18:46:40.526053 systemd[1]: sshd@68-172.237.139.56:22-91.133.156.115:53272.service: Deactivated successfully.
Dec 12 18:46:41.017512 kubelet[2072]: E1212 18:46:41.017384 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:42.017701 kubelet[2072]: E1212 18:46:42.017635 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:42.365275 systemd[1]: Started sshd@70-172.237.139.56:22-2.145.108.228:22228.service - OpenSSH per-connection server daemon (2.145.108.228:22228).
Dec 12 18:46:43.018642 kubelet[2072]: E1212 18:46:43.018589 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:43.354698 systemd[1]: Started sshd@71-172.237.139.56:22-5.75.200.128:36824.service - OpenSSH per-connection server daemon (5.75.200.128:36824).
Dec 12 18:46:43.368688 kubelet[2072]: E1212 18:46:43.368502 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-794fx" podUID="9d771747-d366-4e3a-b362-45818ffae2f6"
Dec 12 18:46:44.019537 kubelet[2072]: E1212 18:46:44.019476 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:44.200107 sshd[3843]: Connection closed by 5.22.108.63 port 43907 [preauth]
Dec 12 18:46:44.202472 systemd[1]: sshd@69-172.237.139.56:22-5.22.108.63:43907.service: Deactivated successfully.
Dec 12 18:46:45.019776 kubelet[2072]: E1212 18:46:45.019706 2072 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:46:45.569916 sshd[3540]: Connection closed by 172.80.252.141 port 28428 [preauth]
Dec 12 18:46:45.573257 systemd[1]: sshd@59-172.237.139.56:22-172.80.252.141:28428.service: Deactivated successfully.