Feb 13 20:28:03.035774 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:28:03.035815 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:28:03.035831 kernel: BIOS-provided physical RAM map:
Feb 13 20:28:03.035842 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:28:03.035852 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:28:03.035863 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:28:03.035879 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 20:28:03.035891 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 20:28:03.035902 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 20:28:03.035913 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:28:03.035924 kernel: NX (Execute Disable) protection: active
Feb 13 20:28:03.035936 kernel: APIC: Static calls initialized
Feb 13 20:28:03.035947 kernel: SMBIOS 2.7 present.
Feb 13 20:28:03.035959 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 20:28:03.035976 kernel: Hypervisor detected: KVM
Feb 13 20:28:03.044441 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:28:03.044483 kernel: kvm-clock: using sched offset of 7033470617 cycles
Feb 13 20:28:03.044499 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:28:03.044515 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 20:28:03.044530 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:28:03.044545 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:28:03.044568 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 20:28:03.044582 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:28:03.044597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:28:03.044612 kernel: Using GB pages for direct mapping
Feb 13 20:28:03.044626 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:28:03.044640 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 20:28:03.044656 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 20:28:03.044670 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 20:28:03.044684 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 20:28:03.044700 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 20:28:03.044713 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 20:28:03.044727 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 20:28:03.044741 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 20:28:03.044755 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 20:28:03.044769 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 20:28:03.044784 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 20:28:03.044797 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 20:28:03.044809 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 20:28:03.044827 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 20:28:03.044846 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 20:28:03.044860 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 20:28:03.044875 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 20:28:03.044889 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 20:28:03.044907 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 20:28:03.044922 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 20:28:03.044936 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 20:28:03.044950 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 20:28:03.044965 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:28:03.044979 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:28:03.046615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 20:28:03.046645 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 20:28:03.046660 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 20:28:03.046683 kernel: Zone ranges:
Feb 13 20:28:03.046697 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:28:03.046859 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 20:28:03.046876 kernel: Normal empty
Feb 13 20:28:03.046891 kernel: Movable zone start for each node
Feb 13 20:28:03.046907 kernel: Early memory node ranges
Feb 13 20:28:03.046922 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:28:03.046938 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 20:28:03.046953 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 20:28:03.046974 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:28:03.048018 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:28:03.048058 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 20:28:03.048075 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 20:28:03.048091 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:28:03.048164 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 20:28:03.048182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:28:03.048197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:28:03.048213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:28:03.048228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:28:03.048250 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:28:03.048265 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:28:03.048280 kernel: TSC deadline timer available
Feb 13 20:28:03.048296 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:28:03.048312 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:28:03.048327 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 20:28:03.048343 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:28:03.048359 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:28:03.048375 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:28:03.048437 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:28:03.048454 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:28:03.048469 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:28:03.048485 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:28:03.048501 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:28:03.048518 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:28:03.048535 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:28:03.048550 kernel: random: crng init done
Feb 13 20:28:03.048569 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:28:03.048584 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:28:03.048600 kernel: Fallback order for Node 0: 0
Feb 13 20:28:03.048615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 20:28:03.048630 kernel: Policy zone: DMA32
Feb 13 20:28:03.048645 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:28:03.048661 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125152K reserved, 0K cma-reserved)
Feb 13 20:28:03.048677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:28:03.048840 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:28:03.048859 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:28:03.048874 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:28:03.048890 kernel: Dynamic Preempt: voluntary
Feb 13 20:28:03.048905 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:28:03.048928 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:28:03.048943 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:28:03.048959 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:28:03.048975 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:28:03.050222 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:28:03.050256 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:28:03.050272 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:28:03.050288 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:28:03.050304 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:28:03.050319 kernel: Console: colour VGA+ 80x25
Feb 13 20:28:03.050335 kernel: printk: console [ttyS0] enabled
Feb 13 20:28:03.050351 kernel: ACPI: Core revision 20230628
Feb 13 20:28:03.050367 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 20:28:03.050525 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:28:03.050548 kernel: x2apic enabled
Feb 13 20:28:03.050565 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:28:03.050592 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 20:28:03.050612 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 20:28:03.050629 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 20:28:03.050646 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 20:28:03.050662 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:28:03.050678 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:28:03.050694 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:28:03.050710 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:28:03.050727 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 20:28:03.050841 kernel: RETBleed: Vulnerable
Feb 13 20:28:03.050863 kernel: Speculative Store Bypass: Vulnerable
Feb 13 20:28:03.050880 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:28:03.050895 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:28:03.054101 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 20:28:03.054124 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:28:03.054141 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:28:03.054165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:28:03.054182 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 20:28:03.054198 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 20:28:03.054214 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 20:28:03.054230 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 20:28:03.054246 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 20:28:03.054262 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 20:28:03.054279 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:28:03.054295 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 20:28:03.054312 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 20:28:03.054328 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 20:28:03.054347 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 20:28:03.054364 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 20:28:03.054380 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 20:28:03.054396 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 20:28:03.054412 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:28:03.054428 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:28:03.054444 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:28:03.054460 kernel: landlock: Up and running.
Feb 13 20:28:03.054476 kernel: SELinux: Initializing.
Feb 13 20:28:03.054493 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:28:03.054510 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:28:03.054526 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 20:28:03.054546 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:28:03.054563 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:28:03.054579 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:28:03.054596 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 20:28:03.054613 kernel: signal: max sigframe size: 3632
Feb 13 20:28:03.054628 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:28:03.054646 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:28:03.054663 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:28:03.054680 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:28:03.054699 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:28:03.054715 kernel: .... node #0, CPUs: #1
Feb 13 20:28:03.054733 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 20:28:03.054834 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:28:03.054853 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:28:03.054869 kernel: smpboot: Max logical packages: 1
Feb 13 20:28:03.054886 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 20:28:03.054901 kernel: devtmpfs: initialized
Feb 13 20:28:03.054921 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:28:03.054938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:28:03.054955 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:28:03.054971 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:28:03.055000 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:28:03.055016 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:28:03.055032 kernel: audit: type=2000 audit(1739478480.656:1): state=initialized audit_enabled=0 res=1
Feb 13 20:28:03.055048 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:28:03.055065 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:28:03.055085 kernel: cpuidle: using governor menu
Feb 13 20:28:03.055100 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:28:03.055117 kernel: dca service started, version 1.12.1
Feb 13 20:28:03.055133 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:28:03.055150 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:28:03.055166 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:28:03.055182 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:28:03.055199 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:28:03.055215 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:28:03.055235 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:28:03.055250 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:28:03.055267 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:28:03.055282 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:28:03.055299 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 20:28:03.055314 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:28:03.055330 kernel: ACPI: Interpreter enabled
Feb 13 20:28:03.055345 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:28:03.055362 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:28:03.055382 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:28:03.055399 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:28:03.055415 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 20:28:03.055432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:28:03.055801 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:28:03.060790 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:28:03.064338 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:28:03.064390 kernel: acpiphp: Slot [3] registered
Feb 13 20:28:03.064424 kernel: acpiphp: Slot [4] registered
Feb 13 20:28:03.064445 kernel: acpiphp: Slot [5] registered
Feb 13 20:28:03.064466 kernel: acpiphp: Slot [6] registered
Feb 13 20:28:03.064487 kernel: acpiphp: Slot [7] registered
Feb 13 20:28:03.064508 kernel: acpiphp: Slot [8] registered
Feb 13 20:28:03.064527 kernel: acpiphp: Slot [9] registered
Feb 13 20:28:03.064541 kernel: acpiphp: Slot [10] registered
Feb 13 20:28:03.064557 kernel: acpiphp: Slot [11] registered
Feb 13 20:28:03.064573 kernel: acpiphp: Slot [12] registered
Feb 13 20:28:03.064593 kernel: acpiphp: Slot [13] registered
Feb 13 20:28:03.064609 kernel: acpiphp: Slot [14] registered
Feb 13 20:28:03.064625 kernel: acpiphp: Slot [15] registered
Feb 13 20:28:03.064640 kernel: acpiphp: Slot [16] registered
Feb 13 20:28:03.064656 kernel: acpiphp: Slot [17] registered
Feb 13 20:28:03.064672 kernel: acpiphp: Slot [18] registered
Feb 13 20:28:03.064688 kernel: acpiphp: Slot [19] registered
Feb 13 20:28:03.064704 kernel: acpiphp: Slot [20] registered
Feb 13 20:28:03.064720 kernel: acpiphp: Slot [21] registered
Feb 13 20:28:03.064739 kernel: acpiphp: Slot [22] registered
Feb 13 20:28:03.064755 kernel: acpiphp: Slot [23] registered
Feb 13 20:28:03.064771 kernel: acpiphp: Slot [24] registered
Feb 13 20:28:03.064787 kernel: acpiphp: Slot [25] registered
Feb 13 20:28:03.064803 kernel: acpiphp: Slot [26] registered
Feb 13 20:28:03.064819 kernel: acpiphp: Slot [27] registered
Feb 13 20:28:03.064835 kernel: acpiphp: Slot [28] registered
Feb 13 20:28:03.064851 kernel: acpiphp: Slot [29] registered
Feb 13 20:28:03.064867 kernel: acpiphp: Slot [30] registered
Feb 13 20:28:03.064883 kernel: acpiphp: Slot [31] registered
Feb 13 20:28:03.064902 kernel: PCI host bridge to bus 0000:00
Feb 13 20:28:03.065113 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:28:03.065244 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:28:03.065369 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:28:03.065490 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:28:03.065626 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:28:03.069504 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:28:03.069726 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:28:03.069887 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 20:28:03.071139 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 20:28:03.071311 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 20:28:03.071452 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 20:28:03.071589 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 20:28:03.072376 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 20:28:03.072597 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 20:28:03.072831 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 20:28:03.072979 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 20:28:03.075214 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 20:28:03.075583 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 20:28:03.075739 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:28:03.075875 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:28:03.076676 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 20:28:03.076836 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 20:28:03.076977 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 20:28:03.077144 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 20:28:03.077166 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:28:03.077183 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:28:03.077252 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:28:03.077270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:28:03.077287 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:28:03.077303 kernel: iommu: Default domain type: Translated
Feb 13 20:28:03.077320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:28:03.077337 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:28:03.077353 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:28:03.077369 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:28:03.077385 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 20:28:03.077528 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 20:28:03.077653 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 20:28:03.077776 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:28:03.077795 kernel: vgaarb: loaded
Feb 13 20:28:03.077811 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 20:28:03.077827 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 20:28:03.077842 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:28:03.077857 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:28:03.077885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:28:03.077900 kernel: pnp: PnP ACPI init
Feb 13 20:28:03.077915 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 20:28:03.077930 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:28:03.077945 kernel: NET: Registered PF_INET protocol family
Feb 13 20:28:03.077960 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:28:03.077975 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:28:03.088435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:28:03.088471 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:28:03.088500 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:28:03.088517 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:28:03.088534 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:28:03.088550 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:28:03.088567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:28:03.088584 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:28:03.088796 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:28:03.088920 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:28:03.089214 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:28:03.094426 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:28:03.094691 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:28:03.094715 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:28:03.094733 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:28:03.100501 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 20:28:03.100536 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:28:03.100554 kernel: Initialise system trusted keyrings
Feb 13 20:28:03.100572 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:28:03.100599 kernel: Key type asymmetric registered
Feb 13 20:28:03.100615 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:28:03.100632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:28:03.100649 kernel: io scheduler mq-deadline registered
Feb 13 20:28:03.100665 kernel: io scheduler kyber registered
Feb 13 20:28:03.108813 kernel: io scheduler bfq registered
Feb 13 20:28:03.108846 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:28:03.108864 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:28:03.108882 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:28:03.108910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:28:03.108927 kernel: i8042: Warning: Keylock active
Feb 13 20:28:03.108943 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:28:03.108960 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:28:03.109207 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 20:28:03.109339 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 20:28:03.109624 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:28:01 UTC (1739478481)
Feb 13 20:28:03.133238 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 20:28:03.133298 kernel: intel_pstate: CPU model not supported
Feb 13 20:28:03.133316 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:28:03.133332 kernel: Segment Routing with IPv6
Feb 13 20:28:03.133347 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:28:03.133364 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:28:03.133379 kernel: Key type dns_resolver registered
Feb 13 20:28:03.133394 kernel: IPI shorthand broadcast: enabled
Feb 13 20:28:03.133410 kernel: sched_clock: Marking stable (1244095716, 321561235)->(1753998749, -188341798)
Feb 13 20:28:03.133426 kernel: registered taskstats version 1
Feb 13 20:28:03.133446 kernel: Loading compiled-in X.509 certificates
Feb 13 20:28:03.133461 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:28:03.133476 kernel: Key type .fscrypt registered
Feb 13 20:28:03.133492 kernel: Key type fscrypt-provisioning registered
Feb 13 20:28:03.133508 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:28:03.133523 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:28:03.133539 kernel: ima: No architecture policies found
Feb 13 20:28:03.133554 kernel: clk: Disabling unused clocks
Feb 13 20:28:03.133573 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:28:03.133588 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:28:03.133604 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:28:03.133619 kernel: Run /init as init process
Feb 13 20:28:03.133634 kernel: with arguments:
Feb 13 20:28:03.133649 kernel: /init
Feb 13 20:28:03.133664 kernel: with environment:
Feb 13 20:28:03.133679 kernel: HOME=/
Feb 13 20:28:03.133693 kernel: TERM=linux
Feb 13 20:28:03.133705 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:28:03.133730 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:28:03.133761 systemd[1]: Detected virtualization amazon.
Feb 13 20:28:03.133779 systemd[1]: Detected architecture x86-64.
Feb 13 20:28:03.133793 systemd[1]: Running in initrd.
Feb 13 20:28:03.133811 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:28:03.133826 systemd[1]: Hostname set to .
Feb 13 20:28:03.133844 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:28:03.133861 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:28:03.133891 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:28:03.133910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:28:03.133927 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:28:03.133945 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:28:03.133961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:28:03.133978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:28:03.134027 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:28:03.134047 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:28:03.134077 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:28:03.134095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:28:03.134111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:28:03.134127 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:28:03.134142 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:28:03.134161 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:28:03.134175 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:28:03.134191 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:28:03.134210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:28:03.134225 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:28:03.134240 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:28:03.134265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:28:03.134279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:28:03.134293 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:28:03.134312 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:28:03.134325 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:28:03.134340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:28:03.134356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:28:03.134373 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:28:03.134397 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:28:03.134413 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:28:03.134428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:28:03.134444 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:28:03.134459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:28:03.134475 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:28:03.134538 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 20:28:03.134575 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:28:03.134591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:28:03.134611 kernel: Bridge firewalling registered
Feb 13 20:28:03.134629 systemd-journald[179]: Journal started
Feb 13 20:28:03.134665 systemd-journald[179]: Runtime Journal (/run/log/journal/ec22c41e015fc690fa24a23d8393f65a) is 4.8M, max 38.6M, 33.7M free.
Feb 13 20:28:03.142970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:28:03.041606 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 20:28:03.134286 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 20:28:03.355185 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:28:03.363152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:28:03.394422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:28:03.402448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:28:03.407219 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:28:03.411221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:28:03.428747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:28:03.459864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:28:03.464439 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:28:03.503226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:28:03.512645 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:28:03.535015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:28:03.577951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:28:03.677339 dracut-cmdline[209]: dracut-dracut-053
Feb 13 20:28:03.687954 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:28:03.752391 systemd-resolved[212]: Positive Trust Anchors:
Feb 13 20:28:03.752781 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:28:03.752846 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:28:03.759026 systemd-resolved[212]: Defaulting to hostname 'linux'.
Feb 13 20:28:03.760673 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:28:03.777440 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:28:03.891022 kernel: SCSI subsystem initialized
Feb 13 20:28:03.905027 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:28:03.935213 kernel: iscsi: registered transport (tcp)
Feb 13 20:28:03.987712 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:28:03.987805 kernel: QLogic iSCSI HBA Driver
Feb 13 20:28:04.074048 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:28:04.080410 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:28:04.128033 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:28:04.128389 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:28:04.135060 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:28:04.245184 kernel: raid6: avx512x4 gen() 2756 MB/s
Feb 13 20:28:04.263050 kernel: raid6: avx512x2 gen() 3027 MB/s
Feb 13 20:28:04.282780 kernel: raid6: avx512x1 gen() 3314 MB/s
Feb 13 20:28:04.303029 kernel: raid6: avx2x4 gen() 9534 MB/s
Feb 13 20:28:04.324229 kernel: raid6: avx2x2 gen() 5157 MB/s
Feb 13 20:28:04.341042 kernel: raid6: avx2x1 gen() 4007 MB/s
Feb 13 20:28:04.341130 kernel: raid6: using algorithm avx2x4 gen() 9534 MB/s
Feb 13 20:28:04.360015 kernel: raid6: .... xor() 1820 MB/s, rmw enabled
Feb 13 20:28:04.360096 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 20:28:04.416123 kernel: xor: automatically using best checksumming function avx
Feb 13 20:28:04.833041 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:28:04.858350 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:28:04.870495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:28:04.924400 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Feb 13 20:28:04.935130 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:28:04.964207 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:28:05.013699 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Feb 13 20:28:05.200247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:28:05.220266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:28:05.420676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:28:05.439512 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:28:05.537808 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:28:05.546398 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:28:05.559536 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:28:05.585279 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:28:05.660274 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:28:05.710355 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:28:05.727014 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:28:05.740236 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 20:28:05.770530 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 20:28:05.770823 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 20:28:05.771178 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:28:05.771269 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:28:05.771292 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a7:3a:11:2a:63
Feb 13 20:28:05.774440 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 20:28:05.802350 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 20:28:05.802608 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:28:05.804757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:28:05.804904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:28:05.810716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:28:05.819366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:28:05.866913 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 20:28:05.819576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:28:05.829357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:28:05.878608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:28:05.902704 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:28:05.902773 kernel: GPT:9289727 != 16777215
Feb 13 20:28:05.902803 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:28:05.903450 kernel: GPT:9289727 != 16777215
Feb 13 20:28:05.905635 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:28:05.915018 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 20:28:06.081018 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Feb 13 20:28:06.142041 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (461)
Feb 13 20:28:06.261982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:28:06.278267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:28:06.300045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 20:28:06.320253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 20:28:06.323253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:28:06.340590 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 20:28:06.362590 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 20:28:06.362741 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 20:28:06.376427 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:28:06.392358 disk-uuid[629]: Primary Header is updated.
Feb 13 20:28:06.392358 disk-uuid[629]: Secondary Entries is updated.
Feb 13 20:28:06.392358 disk-uuid[629]: Secondary Header is updated.
Feb 13 20:28:06.398743 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 20:28:06.402154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 20:28:06.409019 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 20:28:07.420079 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 20:28:07.420416 disk-uuid[630]: The operation has completed successfully.
Feb 13 20:28:07.658494 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:28:07.658594 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:28:07.696250 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:28:07.719681 sh[971]: Success
Feb 13 20:28:07.750018 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:28:07.896124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:28:07.914115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:28:07.931433 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:28:07.971095 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:28:07.971173 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:28:07.971207 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:28:07.973140 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:28:07.975338 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:28:08.085021 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 20:28:08.087507 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:28:08.089325 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:28:08.099565 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:28:08.114222 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:28:08.167470 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:28:08.167636 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:28:08.167701 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 20:28:08.178036 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 20:28:08.207769 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:28:08.211131 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:28:08.222825 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:28:08.233402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:28:08.282166 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:28:08.290329 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:28:08.321423 systemd-networkd[1163]: lo: Link UP
Feb 13 20:28:08.321436 systemd-networkd[1163]: lo: Gained carrier
Feb 13 20:28:08.323243 systemd-networkd[1163]: Enumeration completed
Feb 13 20:28:08.323360 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:28:08.323866 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:28:08.323871 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:28:08.325446 systemd[1]: Reached target network.target - Network.
Feb 13 20:28:08.329027 systemd-networkd[1163]: eth0: Link UP
Feb 13 20:28:08.329033 systemd-networkd[1163]: eth0: Gained carrier
Feb 13 20:28:08.329047 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:28:08.352089 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.17.255/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 20:28:08.755196 ignition[1105]: Ignition 2.19.0
Feb 13 20:28:08.755211 ignition[1105]: Stage: fetch-offline
Feb 13 20:28:08.755495 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:28:08.755507 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 20:28:08.758636 ignition[1105]: Ignition finished successfully
Feb 13 20:28:08.769255 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:28:08.786366 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:28:08.821242 ignition[1172]: Ignition 2.19.0
Feb 13 20:28:08.821257 ignition[1172]: Stage: fetch
Feb 13 20:28:08.821738 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:28:08.821753 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 20:28:08.821893 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 20:28:08.833789 ignition[1172]: PUT result: OK
Feb 13 20:28:08.836413 ignition[1172]: parsed url from cmdline: ""
Feb 13 20:28:08.836425 ignition[1172]: no config URL provided
Feb 13 20:28:08.836434 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:28:08.836449 ignition[1172]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:28:08.836481 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 20:28:08.837483 ignition[1172]: PUT result: OK
Feb 13 20:28:08.837536 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 20:28:08.839862 ignition[1172]: GET result: OK
Feb 13 20:28:08.839915 ignition[1172]: parsing config with SHA512: 1e96fbe9664c70b2567ab21b8439d81c6690cce7ed09bdcb00aff21060b7346b99819ed212c4f88506d30ce875984f37b463c9fb9068a742b88eaea50219297f
Feb 13 20:28:08.845636 unknown[1172]: fetched base config from "system"
Feb 13 20:28:08.845652 unknown[1172]: fetched base config from "system"
Feb 13 20:28:08.846162 ignition[1172]: fetch: fetch complete
Feb 13 20:28:08.845661 unknown[1172]: fetched user config from "aws"
Feb 13 20:28:08.846170 ignition[1172]: fetch: fetch passed
Feb 13 20:28:08.848526 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:28:08.846225 ignition[1172]: Ignition finished successfully
Feb 13 20:28:08.858276 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:28:08.892801 ignition[1178]: Ignition 2.19.0
Feb 13 20:28:08.892831 ignition[1178]: Stage: kargs
Feb 13 20:28:08.893374 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:28:08.893387 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 20:28:08.893497 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 20:28:08.900582 ignition[1178]: PUT result: OK
Feb 13 20:28:08.912212 ignition[1178]: kargs: kargs passed
Feb 13 20:28:08.912420 ignition[1178]: Ignition finished successfully
Feb 13 20:28:08.917305 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:28:08.925951 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:28:08.953259 ignition[1184]: Ignition 2.19.0
Feb 13 20:28:08.953273 ignition[1184]: Stage: disks
Feb 13 20:28:08.953827 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:28:08.953841 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 20:28:08.954024 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 20:28:08.955210 ignition[1184]: PUT result: OK
Feb 13 20:28:08.961585 ignition[1184]: disks: disks passed
Feb 13 20:28:08.961670 ignition[1184]: Ignition finished successfully
Feb 13 20:28:08.965605 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:28:08.969126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:28:08.972921 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:28:08.973062 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:28:08.977025 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:28:08.979819 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:28:08.988314 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:28:09.048202 systemd-fsck[1192]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:28:09.051605 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:28:09.059553 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:28:09.225014 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:28:09.225767 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:28:09.226834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:28:09.250108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:28:09.271142 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:28:09.275951 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:28:09.279270 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:28:09.279327 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:28:09.294564 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:28:09.304389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:28:09.310047 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1211)
Feb 13 20:28:09.313008 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:28:09.313071 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:28:09.313094 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 20:28:09.325023 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 20:28:09.326942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:28:09.795544 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:28:09.827032 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:28:09.850535 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:28:09.877604 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:28:10.121277 systemd-networkd[1163]: eth0: Gained IPv6LL
Feb 13 20:28:10.318258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:28:10.327165 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:28:10.341763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:28:10.361508 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:28:10.363184 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:28:10.404908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:28:10.411978 ignition[1323]: INFO : Ignition 2.19.0
Feb 13 20:28:10.411978 ignition[1323]: INFO : Stage: mount
Feb 13 20:28:10.421503 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:28:10.421503 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 20:28:10.421503 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 20:28:10.421503 ignition[1323]: INFO : PUT result: OK
Feb 13 20:28:10.429772 ignition[1323]: INFO : mount: mount passed
Feb 13 20:28:10.431491 ignition[1323]: INFO : Ignition finished successfully
Feb 13 20:28:10.443471 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:28:10.461321 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:28:10.479290 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:28:10.512016 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1335)
Feb 13 20:28:10.514497 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:28:10.514564 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:28:10.514585 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 20:28:10.520017 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 20:28:10.523025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:28:10.556499 ignition[1352]: INFO : Ignition 2.19.0 Feb 13 20:28:10.556499 ignition[1352]: INFO : Stage: files Feb 13 20:28:10.559054 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:28:10.559054 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:28:10.559054 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:28:10.564705 ignition[1352]: INFO : PUT result: OK Feb 13 20:28:10.568911 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:28:10.570463 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:28:10.570463 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:28:10.590879 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:28:10.593201 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:28:10.596092 unknown[1352]: wrote ssh authorized keys file for user: core Feb 13 20:28:10.600438 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:28:10.600438 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:28:10.600438 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:28:10.600438 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:28:10.611392 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:28:10.611392 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:28:10.611392 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:28:10.611392 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:28:10.611392 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 20:28:11.015155 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 20:28:11.653232 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 20:28:11.657298 ignition[1352]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:28:11.657298 ignition[1352]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:28:11.657298 ignition[1352]: INFO : files: files passed Feb 13 20:28:11.657298 ignition[1352]: INFO : Ignition finished successfully Feb 13 20:28:11.680884 systemd[1]: Finished ignition-files.service - Ignition (files). 
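The files stage above writes /home/core/install.sh and /etc/flatcar/update.conf, downloads the kubernetes sysext image from the sysext-bakery release, links it into /etc/extensions, and installs an ssh key for the "core" user. A hypothetical Ignition (spec 3.x) config that would request those operations is sketched below; the file bodies and the ssh key are placeholders, and only the paths and the download URL come from the log. Flatcar users typically write Butane YAML that is transpiled into this JSON form.

```python
# Hypothetical Ignition config sketch matching the "files" stage above.
# Placeholders: ssh key and both data: URLs. Grounded in the log: the file
# paths, the /etc/extensions symlink, and the sysext-bakery download URL.
import json

BAKERY = ("https://github.com/flatcar/sysext-bakery/releases/download/"
          "latest/kubernetes-v1.32.0-x86-64.raw")

config = {
    "ignition": {"version": "3.4.0"},  # spec version assumed
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
    ]},
    "storage": {
        "files": [
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23!%2Fbin%2Fbash%0A"}},   # placeholder body
            {"path": "/etc/flatcar/update.conf", "mode": 0o644,
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},      # placeholder body
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
             "mode": 0o644, "contents": {"source": BAKERY}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw", "hard": False,
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
        ],
    },
}

print(json.dumps(config, indent=2))
```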
Feb 13 20:28:11.689293 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:28:11.704216 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:28:11.709607 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:28:11.709745 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:28:11.747490 initrd-setup-root-after-ignition[1380]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:28:11.747490 initrd-setup-root-after-ignition[1380]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:28:11.758462 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:28:11.773928 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:28:11.786488 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:28:11.807301 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:28:11.901214 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:28:11.905646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:28:11.915698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:28:11.927443 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:28:11.932977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:28:11.940390 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:28:11.985312 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:28:11.996257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:28:12.036155 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:28:12.040196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:28:12.046915 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:28:12.052279 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:28:12.052525 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:28:12.067164 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:28:12.075904 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:28:12.080645 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:28:12.094175 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:28:12.111949 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:28:12.112204 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:28:12.129977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:28:12.141752 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:28:12.144975 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:28:12.147627 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:28:12.148968 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:28:12.149175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 20:28:12.159448 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:28:12.161131 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:28:12.164313 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:28:12.167715 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:28:12.169444 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:28:12.169654 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:28:12.175778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:28:12.176044 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:28:12.183220 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:28:12.183381 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:28:12.207419 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:28:12.233339 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:28:12.234716 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:28:12.235781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:28:12.238424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:28:12.238702 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:28:12.254511 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:28:12.286968 ignition[1404]: INFO : Ignition 2.19.0 Feb 13 20:28:12.286968 ignition[1404]: INFO : Stage: umount Feb 13 20:28:12.286968 ignition[1404]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:28:12.286968 ignition[1404]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:28:12.286968 ignition[1404]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:28:12.286968 ignition[1404]: INFO : PUT result: OK Feb 13 20:28:12.254634 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:28:12.290552 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:28:12.322514 ignition[1404]: INFO : umount: umount passed Feb 13 20:28:12.325518 ignition[1404]: INFO : Ignition finished successfully Feb 13 20:28:12.334631 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:28:12.338027 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:28:12.362671 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:28:12.362817 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:28:12.369256 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:28:12.369345 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:28:12.374556 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:28:12.374635 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:28:12.377308 systemd[1]: Stopped target network.target - Network. Feb 13 20:28:12.379897 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:28:12.380136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:28:12.383048 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:28:12.384815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 20:28:12.399943 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:28:12.419223 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:28:12.444780 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:28:12.452641 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:28:12.452707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:28:12.455913 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:28:12.455998 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:28:12.460668 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:28:12.460763 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:28:12.465435 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:28:12.465523 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:28:12.468197 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:28:12.474185 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:28:12.479728 systemd-networkd[1163]: eth0: DHCPv6 lease lost Feb 13 20:28:12.483476 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:28:12.483615 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:28:12.492580 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:28:12.492743 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:28:12.503636 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:28:12.503830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:28:12.512335 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:28:12.512432 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:28:12.520981 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:28:12.521102 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:28:12.529149 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:28:12.530753 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:28:12.530919 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:28:12.536315 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:28:12.536451 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:28:12.539179 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:28:12.539244 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:28:12.540735 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:28:12.540837 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:28:12.548095 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:28:12.574430 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:28:12.574691 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:28:12.579856 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:28:12.579951 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 20:28:12.582670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:28:12.582721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:28:12.585580 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:28:12.585659 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:28:12.593432 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:28:12.594353 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:28:12.600369 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:28:12.600459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:28:12.612805 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:28:12.615825 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:28:12.615930 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:28:12.620398 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:28:12.620476 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:28:12.625958 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:28:12.626543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:28:12.636565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:28:12.636659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:28:12.644654 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:28:12.646521 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:28:12.651395 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:28:12.654398 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:28:12.658689 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:28:12.671509 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:28:12.695428 systemd[1]: Switching root. Feb 13 20:28:12.740378 systemd-journald[179]: Journal stopped Feb 13 20:28:15.459086 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Feb 13 20:28:15.459275 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:28:15.459301 kernel: SELinux: policy capability open_perms=1 Feb 13 20:28:15.459320 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:28:15.459342 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:28:15.459362 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:28:15.459381 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:28:15.459401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:28:15.459426 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:28:15.459446 kernel: audit: type=1403 audit(1739478493.189:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:28:15.459470 systemd[1]: Successfully loaded SELinux policy in 84.772ms. Feb 13 20:28:15.459495 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.521ms. 
Feb 13 20:28:15.459523 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:28:15.459546 systemd[1]: Detected virtualization amazon. Feb 13 20:28:15.459567 systemd[1]: Detected architecture x86-64. Feb 13 20:28:15.459587 systemd[1]: Detected first boot. Feb 13 20:28:15.464165 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:28:15.464471 zram_generator::config[1446]: No configuration found. Feb 13 20:28:15.464507 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:28:15.464531 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:28:15.464553 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:28:15.464576 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:28:15.464606 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:28:15.464627 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:28:15.464653 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:28:15.464675 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:28:15.464698 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:28:15.464720 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:28:15.464743 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:28:15.464765 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:28:15.464791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:28:15.464812 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:28:15.464834 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:28:15.464856 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:28:15.464876 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:28:15.464898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:28:15.464919 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:28:15.464941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:28:15.464964 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:28:15.471027 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:28:15.471754 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:28:15.471797 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:28:15.471821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:28:15.471845 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:28:15.471867 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:28:15.471889 systemd[1]: Reached target swap.target - Swaps. 
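"Detected first boot" followed by "Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the UUID the hypervisor exposes rather than generating a random one. On KVM-backed EC2 instances that UUID is visible through DMI in sysfs; a minimal sketch of reading it, assuming the usual (root-readable) sysfs path and leaving aside how systemd actually turns it into the machine ID:

```python
# Sketch of where the "VM UUID" in the log above comes from: the hypervisor
# exposes it via DMI, readable from sysfs (typically root-only). How systemd
# derives /etc/machine-id from this value is not shown here.
from pathlib import Path

def read_vm_uuid() -> str:
    return Path("/sys/class/dmi/id/product_uuid").read_text().strip()

if __name__ == "__main__":
    print(read_vm_uuid())
```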
Feb 13 20:28:15.471912 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:28:15.471941 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:28:15.471963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:28:15.471985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:28:15.472022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:28:15.472045 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:28:15.472067 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:28:15.472089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:28:15.472110 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:28:15.472132 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:15.472158 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:28:15.472483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:28:15.472511 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:28:15.472535 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:28:15.472557 systemd[1]: Reached target machines.target - Containers. Feb 13 20:28:15.472580 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:28:15.472602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:15.472624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:28:15.472754 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:28:15.472780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:15.472803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:28:15.472824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:15.472846 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:28:15.472868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:15.472892 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:28:15.472914 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:28:15.472936 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:28:15.472962 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:28:15.472985 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:28:15.480115 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:28:15.480145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:28:15.480169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:28:15.480439 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Feb 13 20:28:15.480470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:28:15.480614 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:28:15.480641 systemd[1]: Stopped verity-setup.service. Feb 13 20:28:15.480674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:15.480695 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:28:15.480717 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:28:15.480739 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:28:15.480762 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:28:15.480787 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:28:15.480809 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:28:15.480831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:28:15.480853 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:28:15.480876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:28:15.480898 kernel: fuse: init (API version 7.39) Feb 13 20:28:15.480921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:15.480945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:15.480970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:15.481131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:15.481153 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:28:15.481173 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:28:15.481196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:28:15.481217 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:28:15.481236 kernel: loop: module loaded Feb 13 20:28:15.481254 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:15.481274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:15.481292 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:28:15.481356 systemd-journald[1521]: Collecting audit messages is disabled. Feb 13 20:28:15.481397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:28:15.481416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:28:15.481438 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:28:15.481457 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:28:15.481476 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:28:15.481552 systemd-journald[1521]: Journal started Feb 13 20:28:15.484466 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec22c41e015fc690fa24a23d8393f65a) is 4.8M, max 38.6M, 33.7M free. Feb 13 20:28:14.796464 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:28:14.895814 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Feb 13 20:28:14.896292 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:28:15.504170 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:28:15.524017 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:28:15.526560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:15.533278 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:28:15.547553 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:28:15.547738 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:28:15.547770 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:28:15.563023 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:28:15.586163 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:28:15.609032 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:28:15.601720 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:28:15.607439 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:28:15.610371 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:28:15.621101 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:28:15.642211 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:28:15.690042 kernel: ACPI: bus type drm_connector registered Feb 13 20:28:15.704786 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:28:15.711016 kernel: loop0: detected capacity change from 0 to 61336 Feb 13 20:28:15.716210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:28:15.718465 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:28:15.718899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:28:15.722177 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:28:15.728289 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:28:15.740189 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:28:15.779647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:28:15.819132 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:28:15.841384 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec22c41e015fc690fa24a23d8393f65a is 98.420ms for 951 entries. Feb 13 20:28:15.841384 systemd-journald[1521]: System Journal (/var/log/journal/ec22c41e015fc690fa24a23d8393f65a) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:28:15.980764 systemd-journald[1521]: Received client request to flush runtime journal. Feb 13 20:28:15.980827 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:28:15.980854 kernel: loop1: detected capacity change from 0 to 142488 Feb 13 20:28:15.908439 udevadm[1581]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:28:15.917288 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:28:15.918901 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:28:15.928957 systemd-tmpfiles[1551]: ACLs are not supported, ignoring. Feb 13 20:28:15.928981 systemd-tmpfiles[1551]: ACLs are not supported, ignoring. Feb 13 20:28:15.937925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:28:15.984680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:28:15.991446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:28:16.004393 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:28:16.122649 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:28:16.137794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:28:16.162039 kernel: loop2: detected capacity change from 0 to 140768 Feb 13 20:28:16.336115 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Feb 13 20:28:16.336146 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Feb 13 20:28:16.379477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:28:16.406482 kernel: loop3: detected capacity change from 0 to 218376 Feb 13 20:28:16.593252 kernel: loop4: detected capacity change from 0 to 61336 Feb 13 20:28:16.638135 kernel: loop5: detected capacity change from 0 to 142488 Feb 13 20:28:16.690036 kernel: loop6: detected capacity change from 0 to 140768 Feb 13 20:28:16.746032 kernel: loop7: detected capacity change from 0 to 218376 Feb 13 20:28:16.784926 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 20:28:16.790266 (sd-merge)[1602]: Merged extensions into '/usr'. Feb 13 20:28:16.798198 systemd[1]: Reloading requested from client PID 1550 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:28:16.798610 systemd[1]: Reloading... Feb 13 20:28:16.984017 zram_generator::config[1628]: No configuration found. Feb 13 20:28:17.286245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:17.590275 systemd[1]: Reloading finished in 790 ms. Feb 13 20:28:17.624504 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:28:17.638358 systemd[1]: Starting ensure-sysext.service... Feb 13 20:28:17.650233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:28:17.672327 systemd[1]: Reloading requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:28:17.672559 systemd[1]: Reloading... Feb 13 20:28:17.704601 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:28:17.705308 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:28:17.706649 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
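The (sd-merge) entries above show systemd-sysext overlaying the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') onto /usr, which is why systemd then reloads its unit set. A rough sketch of enumerating such images follows; /etc/extensions is grounded in this log (the kubernetes.raw symlink written by Ignition), while the other search directories are listed as an assumption about the usual systemd-sysext locations.

```python
# Rough sketch of listing sysext images like the ones merged above.
# /etc/extensions appears in this log; /run/extensions and /var/lib/extensions
# are assumed additional search directories.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    images = []
    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for image in sorted(base.glob("*.raw")):
            # kubernetes.raw is a symlink into /opt/extensions, so resolve it.
            images.append((image, image.resolve()))
    return images

if __name__ == "__main__":
    for link, target in list_extension_images():
        print(f"{link} -> {target}")
```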
Feb 13 20:28:17.707123 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Feb 13 20:28:17.707215 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Feb 13 20:28:17.717861 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:28:17.717880 systemd-tmpfiles[1677]: Skipping /boot Feb 13 20:28:17.790541 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:28:17.790557 systemd-tmpfiles[1677]: Skipping /boot Feb 13 20:28:17.886035 zram_generator::config[1705]: No configuration found. Feb 13 20:28:18.149423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:18.233638 systemd[1]: Reloading finished in 560 ms. Feb 13 20:28:18.253974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:28:18.265034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:28:18.290297 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:28:18.301385 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:28:18.315433 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:28:18.339545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:28:18.365576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:28:18.378199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:28:18.417006 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:28:18.453671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:18.454773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:18.467133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:18.517408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:18.547872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:18.549963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:18.550558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:18.567530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:18.568431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:18.569682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:18.570055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 20:28:18.600240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:18.600619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:18.617873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:28:18.620137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:18.620342 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:28:18.623741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:28:18.630013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:28:18.762718 systemd[1]: Finished ensure-sysext.service. Feb 13 20:28:18.775388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:18.794201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:18.820770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:18.821048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:18.835257 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:18.843426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:18.851930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:28:18.852133 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:28:18.858968 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:28:18.859485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:28:18.879179 systemd-udevd[1761]: Using default interface naming scheme 'v255'. Feb 13 20:28:18.907600 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:28:18.920911 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:28:18.941795 augenrules[1793]: No rules Feb 13 20:28:18.949411 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:28:18.957028 ldconfig[1543]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:28:18.965981 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:28:18.977076 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:28:18.997633 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:28:19.009264 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:28:19.019444 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:28:19.023263 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:28:19.115447 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 20:28:19.305221 systemd-resolved[1760]: Positive Trust Anchors: Feb 13 20:28:19.310277 systemd-resolved[1760]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:28:19.310340 systemd-resolved[1760]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:28:19.333119 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:28:19.347289 systemd-resolved[1760]: Defaulting to hostname 'linux'. Feb 13 20:28:19.385239 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:28:19.389794 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:28:19.416368 systemd-networkd[1807]: lo: Link UP Feb 13 20:28:19.422257 systemd-networkd[1807]: lo: Gained carrier Feb 13 20:28:19.423665 systemd-networkd[1807]: Enumeration completed Feb 13 20:28:19.423984 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:28:19.427574 systemd[1]: Reached target network.target - Network. Feb 13 20:28:19.439337 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:28:19.460674 (udev-worker)[1805]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:28:19.604748 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:28:19.605866 systemd-networkd[1807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:28:19.610671 systemd-networkd[1807]: eth0: Link UP Feb 13 20:28:19.610976 systemd-networkd[1807]: eth0: Gained carrier Feb 13 20:28:19.614927 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:28:19.631320 systemd-networkd[1807]: eth0: DHCPv4 address 172.31.17.255/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 20:28:19.644231 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 20:28:19.669270 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 20:28:19.679016 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1819) Feb 13 20:28:19.694066 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:28:19.701028 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 20:28:19.711041 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 20:28:19.714042 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 20:28:19.930447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:28:19.943084 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:28:20.062059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
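The lease networkd reports above, 172.31.17.255/20 with gateway 172.31.16.1, can look odd because the address ends in .255; with a /20 prefix that is an ordinary host address inside 172.31.16.0/20, not the broadcast address. A quick check with the standard library:

```python
# Quick check of the DHCPv4 lease logged above: with a /20 prefix,
# 172.31.17.255 is a normal host address and the gateway is in the
# same 172.31.16.0/20 network.
import ipaddress

iface = ipaddress.ip_interface("172.31.17.255/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                                  # 172.31.16.0/20
print(iface.network.broadcast_address)                # 172.31.31.255
print(iface.ip != iface.network.broadcast_address)    # True
print(gateway in iface.network)                       # True
```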
Feb 13 20:28:20.073461 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:28:20.075296 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:28:20.092909 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:28:20.190765 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:28:20.254942 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:28:20.461802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:28:20.504335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:28:20.506420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:28:20.510885 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:28:20.515674 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:28:20.518170 lvm[1925]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:28:20.518649 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:28:20.520146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:28:20.522404 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:28:20.525941 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:28:20.528172 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:28:20.530247 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:28:20.530305 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:28:20.531515 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:28:20.534759 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:28:20.539837 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:28:20.559530 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:28:20.568487 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:28:20.572542 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:28:20.576649 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:28:20.580297 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:28:20.583381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:28:20.583424 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:28:20.593392 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:28:20.596986 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:28:20.600389 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:28:20.609263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:28:20.614216 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 20:28:20.618459 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:28:20.629549 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:28:20.649353 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 20:28:20.666626 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 20:28:20.681779 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:28:20.695703 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:28:20.704185 jq[1934]: false Feb 13 20:28:20.714184 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:28:20.722741 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:28:20.723798 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:28:20.728501 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:28:20.741830 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:28:20.748767 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:28:20.750067 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:28:20.898699 update_engine[1943]: I20250213 20:28:20.893146 1943 main.cc:92] Flatcar Update Engine starting Feb 13 20:28:20.920340 ntpd[1937]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: ---------------------------------------------------- Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: corporation. Support and training for ntp-4 are Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: available at https://www.nwtime.org/support Feb 13 20:28:20.926799 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: ---------------------------------------------------- Feb 13 20:28:20.920373 ntpd[1937]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:28:20.920384 ntpd[1937]: ---------------------------------------------------- Feb 13 20:28:20.937283 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: proto: precision = 0.104 usec (-23) Feb 13 20:28:20.937283 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: basedate set to 2025-02-01 Feb 13 20:28:20.937283 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: gps base set to 2025-02-02 (week 2352) Feb 13 20:28:20.920394 ntpd[1937]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:28:20.920404 ntpd[1937]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:28:20.920415 ntpd[1937]: corporation. 
Support and training for ntp-4 are Feb 13 20:28:20.920425 ntpd[1937]: available at https://www.nwtime.org/support Feb 13 20:28:20.920437 ntpd[1937]: ---------------------------------------------------- Feb 13 20:28:20.933960 ntpd[1937]: proto: precision = 0.104 usec (-23) Feb 13 20:28:20.950427 jq[1944]: true Feb 13 20:28:20.934327 ntpd[1937]: basedate set to 2025-02-01 Feb 13 20:28:20.934342 ntpd[1937]: gps base set to 2025-02-02 (week 2352) Feb 13 20:28:20.961157 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:28:20.963213 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:28:20.963213 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:28:20.963213 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:28:20.963213 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Listen normally on 3 eth0 172.31.17.255:123 Feb 13 20:28:20.963213 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Listen normally on 4 lo [::1]:123 Feb 13 20:28:20.962775 ntpd[1937]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:28:20.961454 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:28:20.963541 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: bind(21) AF_INET6 fe80::4a7:3aff:fe11:2a63%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:28:20.963541 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: unable to create socket on eth0 (5) for fe80::4a7:3aff:fe11:2a63%2#123 Feb 13 20:28:20.963541 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: failed to init interface for address fe80::4a7:3aff:fe11:2a63%2 Feb 13 20:28:20.963541 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: Listening on routing socket on fd #21 for interface updates Feb 13 20:28:20.962831 ntpd[1937]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:28:20.963130 ntpd[1937]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:28:20.963171 ntpd[1937]: Listen normally on 3 eth0 172.31.17.255:123 Feb 13 20:28:20.963211 ntpd[1937]: Listen normally on 4 lo [::1]:123 Feb 13 20:28:20.963258 ntpd[1937]: bind(21) AF_INET6 fe80::4a7:3aff:fe11:2a63%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:28:20.963279 ntpd[1937]: unable to create socket on eth0 (5) for fe80::4a7:3aff:fe11:2a63%2#123 Feb 13 20:28:20.963295 ntpd[1937]: failed to init interface for address fe80::4a7:3aff:fe11:2a63%2 Feb 13 20:28:20.963329 ntpd[1937]: Listening on routing socket on fd #21 for interface updates Feb 13 20:28:20.964520 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:28:20.977091 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:28:20.981832 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:28:20.981832 ntpd[1937]: 13 Feb 20:28:20 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:28:20.964929 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:28:20.977123 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:28:20.983118 extend-filesystems[1935]: Found loop4 Feb 13 20:28:20.983118 extend-filesystems[1935]: Found loop5 Feb 13 20:28:20.983118 extend-filesystems[1935]: Found loop6 Feb 13 20:28:20.983118 extend-filesystems[1935]: Found loop7 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p1 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p2 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p3 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found usr Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p4 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p6 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p7 Feb 13 20:28:20.990659 extend-filesystems[1935]: Found nvme0n1p9 Feb 13 20:28:20.990659 extend-filesystems[1935]: Checking size of /dev/nvme0n1p9 Feb 13 20:28:21.014481 dbus-daemon[1933]: [system] SELinux support is enabled Feb 13 20:28:21.015353 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:28:21.023492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:28:21.023534 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:28:21.031194 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:28:21.031224 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:28:21.061696 jq[1964]: true Feb 13 20:28:21.064569 (ntainerd)[1967]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:28:21.070012 dbus-daemon[1933]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1807 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:28:21.098311 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:28:21.100456 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 20:28:21.104810 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:28:21.113848 update_engine[1943]: I20250213 20:28:21.109539 1943 update_check_scheduler.cc:74] Next update check in 8m35s Feb 13 20:28:21.120257 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 20:28:21.131285 extend-filesystems[1935]: Resized partition /dev/nvme0n1p9 Feb 13 20:28:21.151449 extend-filesystems[1984]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:28:21.166486 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 20:28:21.166597 coreos-metadata[1932]: Feb 13 20:28:21.161 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 20:28:21.166597 coreos-metadata[1932]: Feb 13 20:28:21.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 20:28:21.171259 coreos-metadata[1932]: Feb 13 20:28:21.168 INFO Fetch successful Feb 13 20:28:21.171259 coreos-metadata[1932]: Feb 13 20:28:21.171 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 20:28:21.170765 systemd-logind[1942]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:28:21.170792 systemd-logind[1942]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 20:28:21.170817 systemd-logind[1942]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:28:21.174030 coreos-metadata[1932]: Feb 13 20:28:21.172 INFO Fetch successful Feb 13 20:28:21.174030 coreos-metadata[1932]: Feb 13 20:28:21.172 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 20:28:21.175790 coreos-metadata[1932]: Feb 13 20:28:21.175 INFO Fetch successful Feb 13 20:28:21.175878 coreos-metadata[1932]: Feb 13 20:28:21.175 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 20:28:21.178213 coreos-metadata[1932]: Feb 13 20:28:21.176 INFO Fetch successful Feb 13 20:28:21.178213 coreos-metadata[1932]: Feb 13 20:28:21.176 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 20:28:21.178213 coreos-metadata[1932]: Feb 13 20:28:21.177 INFO Fetch failed with 404: resource not found Feb 13 20:28:21.178213 coreos-metadata[1932]: Feb 13 20:28:21.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 20:28:21.178213 coreos-metadata[1932]: Feb 13 20:28:21.178 INFO Fetch successful Feb 13 20:28:21.178213 coreos-metadata[1932]: Feb 13 20:28:21.178 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 20:28:21.178500 systemd-logind[1942]: New seat seat0. Feb 13 20:28:21.180024 coreos-metadata[1932]: Feb 13 20:28:21.179 INFO Fetch successful Feb 13 20:28:21.180024 coreos-metadata[1932]: Feb 13 20:28:21.179 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 20:28:21.182485 coreos-metadata[1932]: Feb 13 20:28:21.180 INFO Fetch successful Feb 13 20:28:21.182485 coreos-metadata[1932]: Feb 13 20:28:21.180 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 20:28:21.183208 coreos-metadata[1932]: Feb 13 20:28:21.183 INFO Fetch successful Feb 13 20:28:21.183284 coreos-metadata[1932]: Feb 13 20:28:21.183 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 20:28:21.186554 systemd[1]: Started systemd-logind.service - User Login Management. 
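The coreos-metadata fetches above follow the standard EC2 IMDSv2 pattern that the log itself spells out: first PUT a session token to /latest/api/token, then GET each metadata path with that token. A minimal sketch of the same flow in Python; the metadata paths mirror the ones in the log, while the token TTL value is just an assumption for illustration.

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds=21600):
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        req = urllib.request.Request(
            IMDS + path, headers={"X-aws-ec2-metadata-token": token}
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        for path in ("/2021-01-03/meta-data/instance-id",
                     "/2021-01-03/meta-data/local-ipv4",
                     "/2021-01-03/meta-data/placement/availability-zone"):
            print(path, "=>", imds_get(path, token))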
Feb 13 20:28:21.191156 coreos-metadata[1932]: Feb 13 20:28:21.191 INFO Fetch successful Feb 13 20:28:21.282027 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 20:28:21.317117 extend-filesystems[1984]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 20:28:21.317117 extend-filesystems[1984]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:28:21.317117 extend-filesystems[1984]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 20:28:21.325331 systemd-networkd[1807]: eth0: Gained IPv6LL Feb 13 20:28:21.326095 extend-filesystems[1935]: Resized filesystem in /dev/nvme0n1p9 Feb 13 20:28:21.329279 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:28:21.329647 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:28:21.360550 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1810) Feb 13 20:28:21.363376 bash[2004]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:28:21.376767 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:28:21.379406 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:28:21.381941 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:28:21.402951 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:28:21.426895 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 20:28:21.440296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:21.455606 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:28:21.457118 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:28:21.469767 systemd[1]: Starting sshkeys.service... Feb 13 20:28:21.604138 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:28:21.613743 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:28:21.772637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:28:21.916544 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:28:21.917108 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:28:21.926652 dbus-daemon[1933]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1980 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:28:21.945461 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 20:28:21.997100 amazon-ssm-agent[2031]: Initializing new seelog logger Feb 13 20:28:21.998870 amazon-ssm-agent[2031]: New Seelog Logger Creation Complete Feb 13 20:28:21.999252 amazon-ssm-agent[2031]: 2025/02/13 20:28:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:21.999252 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:21.999923 amazon-ssm-agent[2031]: 2025/02/13 20:28:21 processing appconfig overrides Feb 13 20:28:22.013127 amazon-ssm-agent[2031]: 2025/02/13 20:28:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
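For scale, the resize entries above are in 4 KiB blocks, so the online resize grows the root filesystem on /dev/nvme0n1p9 from 553472 x 4 KiB ≈ 2.1 GiB to 1489915 x 4 KiB ≈ 5.7 GiB while it stays mounted on /.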
Feb 13 20:28:22.013127 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:22.013127 amazon-ssm-agent[2031]: 2025/02/13 20:28:22 processing appconfig overrides Feb 13 20:28:22.013127 amazon-ssm-agent[2031]: 2025/02/13 20:28:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:22.013127 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:22.016357 amazon-ssm-agent[2031]: 2025/02/13 20:28:22 processing appconfig overrides Feb 13 20:28:22.022311 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO Proxy environment variables: Feb 13 20:28:22.025432 polkitd[2099]: Started polkitd version 121 Feb 13 20:28:22.038476 amazon-ssm-agent[2031]: 2025/02/13 20:28:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:22.038476 amazon-ssm-agent[2031]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:28:22.041930 amazon-ssm-agent[2031]: 2025/02/13 20:28:22 processing appconfig overrides Feb 13 20:28:22.082303 polkitd[2099]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:28:22.082403 polkitd[2099]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:28:22.092733 polkitd[2099]: Finished loading, compiling and executing 2 rules Feb 13 20:28:22.099906 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:28:22.100114 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:28:22.106360 polkitd[2099]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:28:22.108273 sshd_keygen[1970]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:28:22.126071 containerd[1967]: time="2025-02-13T20:28:22.125835498Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:28:22.145053 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO no_proxy: Feb 13 20:28:22.185309 coreos-metadata[2043]: Feb 13 20:28:22.183 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 20:28:22.187051 coreos-metadata[2043]: Feb 13 20:28:22.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 20:28:22.195103 coreos-metadata[2043]: Feb 13 20:28:22.193 INFO Fetch successful Feb 13 20:28:22.195103 coreos-metadata[2043]: Feb 13 20:28:22.193 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 20:28:22.197327 coreos-metadata[2043]: Feb 13 20:28:22.195 INFO Fetch successful Feb 13 20:28:22.200826 unknown[2043]: wrote ssh authorized keys file for user: core Feb 13 20:28:22.224634 locksmithd[1982]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:28:22.233906 systemd-hostnamed[1980]: Hostname set to (transient) Feb 13 20:28:22.234238 systemd-resolved[1760]: System hostname changed to 'ip-172-31-17-255'. Feb 13 20:28:22.248236 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO https_proxy: Feb 13 20:28:22.260935 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:28:22.279754 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:28:22.291840 update-ssh-keys[2142]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:28:22.300712 containerd[1967]: time="2025-02-13T20:28:22.300576716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:28:22.302572 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.309484782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.309610826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.309643490Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.310062153Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.311535019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.311675616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.311696584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.312921497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.312950977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.312975325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:22.314776 containerd[1967]: time="2025-02-13T20:28:22.313000602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:22.310143 systemd[1]: Finished sshkeys.service. Feb 13 20:28:22.315860 containerd[1967]: time="2025-02-13T20:28:22.313213105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:22.315860 containerd[1967]: time="2025-02-13T20:28:22.315666980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:22.317010 containerd[1967]: time="2025-02-13T20:28:22.316108462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:22.317010 containerd[1967]: time="2025-02-13T20:28:22.316137722Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:28:22.317010 containerd[1967]: time="2025-02-13T20:28:22.316271680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:28:22.317010 containerd[1967]: time="2025-02-13T20:28:22.316337374Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:28:22.324954 containerd[1967]: time="2025-02-13T20:28:22.324402271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:28:22.324954 containerd[1967]: time="2025-02-13T20:28:22.324681778Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:28:22.324954 containerd[1967]: time="2025-02-13T20:28:22.324719340Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:28:22.324954 containerd[1967]: time="2025-02-13T20:28:22.324744597Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:28:22.324954 containerd[1967]: time="2025-02-13T20:28:22.324766973Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:28:22.324954 containerd[1967]: time="2025-02-13T20:28:22.324955009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325302315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325467731Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325490713Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325510255Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325529827Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325549087Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325568506Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325591697Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325612981Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325633976Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325652874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325672997Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325701717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326136 containerd[1967]: time="2025-02-13T20:28:22.325721047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325739174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325760643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325778209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325797159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325826230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325843685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325862754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325895522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325913545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325929808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325947985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.326699 containerd[1967]: time="2025-02-13T20:28:22.325972256Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:28:22.328935 containerd[1967]: time="2025-02-13T20:28:22.328102655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.328935 containerd[1967]: time="2025-02-13T20:28:22.328145589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 20:28:22.329107 containerd[1967]: time="2025-02-13T20:28:22.329043799Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:28:22.329340 containerd[1967]: time="2025-02-13T20:28:22.329308084Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:28:22.329466 containerd[1967]: time="2025-02-13T20:28:22.329441470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:28:22.329517 containerd[1967]: time="2025-02-13T20:28:22.329468288Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:28:22.329558 containerd[1967]: time="2025-02-13T20:28:22.329507520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:28:22.329558 containerd[1967]: time="2025-02-13T20:28:22.329524927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.329558 containerd[1967]: time="2025-02-13T20:28:22.329546247Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:28:22.329672 containerd[1967]: time="2025-02-13T20:28:22.329585047Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:28:22.329672 containerd[1967]: time="2025-02-13T20:28:22.329607073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:28:22.332228 containerd[1967]: time="2025-02-13T20:28:22.331243582Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:28:22.332493 containerd[1967]: time="2025-02-13T20:28:22.332224708Z" level=info msg="Connect containerd service" Feb 13 20:28:22.332493 containerd[1967]: time="2025-02-13T20:28:22.332307181Z" level=info msg="using legacy CRI server" Feb 13 20:28:22.332493 containerd[1967]: time="2025-02-13T20:28:22.332336681Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:28:22.335066 containerd[1967]: time="2025-02-13T20:28:22.333068874Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:28:22.335182 containerd[1967]: time="2025-02-13T20:28:22.335089403Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:28:22.335229 containerd[1967]: time="2025-02-13T20:28:22.335178122Z" level=info msg="Start subscribing containerd event" Feb 13 20:28:22.335265 containerd[1967]: time="2025-02-13T20:28:22.335240422Z" level=info msg="Start recovering state" Feb 13 20:28:22.335353 containerd[1967]: time="2025-02-13T20:28:22.335333781Z" level=info msg="Start event monitor" Feb 13 20:28:22.335395 containerd[1967]: time="2025-02-13T20:28:22.335355779Z" level=info msg="Start snapshots syncer" Feb 13 20:28:22.335395 containerd[1967]: time="2025-02-13T20:28:22.335369834Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:28:22.335395 containerd[1967]: time="2025-02-13T20:28:22.335381285Z" level=info msg="Start streaming server" Feb 13 20:28:22.336262 containerd[1967]: time="2025-02-13T20:28:22.336230513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:28:22.336346 containerd[1967]: time="2025-02-13T20:28:22.336302542Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:28:22.336387 containerd[1967]: time="2025-02-13T20:28:22.336363086Z" level=info msg="containerd successfully booted in 0.220460s" Feb 13 20:28:22.348076 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO http_proxy: Feb 13 20:28:22.343780 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:28:22.346185 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:28:22.346445 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:28:22.360816 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
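The containerd error above, "no network config found in /etc/cni/net.d", is expected at this stage: the CRI plugin's CniConfig points at /opt/cni/bin and /etc/cni/net.d, and nothing has populated the config directory yet; a later kubelet entry ("No cni config template is specified, wait for other system components to drop the config.") confirms that a network plugin, Calico in this cluster, is supposed to write it. Purely as an illustration of the file format involved (not what Calico actually installs), a minimal CNI conflist could be dropped like this:

    import json
    import os

    # Hypothetical minimal conflist; the real network plugin (Calico here)
    # writes its own, much richer configuration into this directory.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "lo",
        "plugins": [{"type": "loopback"}],
    }
    os.makedirs("/etc/cni/net.d", exist_ok=True)
    with open("/etc/cni/net.d/99-loopback.conflist", "w") as f:
        json.dump(conflist, f, indent=2)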
Feb 13 20:28:22.445341 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO Checking if agent identity type OnPrem can be assumed Feb 13 20:28:22.450231 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:28:22.483110 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:28:22.504927 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:28:22.510345 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:28:22.543716 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO Checking if agent identity type EC2 can be assumed Feb 13 20:28:22.643482 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO Agent will take identity from EC2 Feb 13 20:28:22.748792 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:28:22.849147 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [Registrar] Starting registrar module Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [EC2Identity] EC2 registration was successful. Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [CredentialRefresher] credentialRefresher has started Feb 13 20:28:22.931021 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 20:28:22.932305 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 20:28:22.948554 amazon-ssm-agent[2031]: 2025-02-13 20:28:22 INFO [CredentialRefresher] Next credential rotation will be in 31.483326661716667 minutes Feb 13 20:28:23.921016 ntpd[1937]: Listen normally on 6 eth0 [fe80::4a7:3aff:fe11:2a63%2]:123 Feb 13 20:28:23.922781 ntpd[1937]: 13 Feb 20:28:23 ntpd[1937]: Listen normally on 6 eth0 [fe80::4a7:3aff:fe11:2a63%2]:123 Feb 13 20:28:23.954511 amazon-ssm-agent[2031]: 2025-02-13 20:28:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 20:28:24.055923 amazon-ssm-agent[2031]: 2025-02-13 20:28:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2170) started Feb 13 20:28:24.157678 amazon-ssm-agent[2031]: 2025-02-13 20:28:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 20:28:25.092445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:25.099496 systemd[1]: Reached target multi-user.target - Multi-User System. 
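By this point amazon-ssm-agent has taken the EC2 identity, registered, and started its credential refresher; the "Next credential rotation will be in 31.483326661716667 minutes" entry works out to roughly 31 min 29 s. Earlier it also reported applying overrides from /etc/amazon/ssm/amazon-ssm-agent.json several times (once per worker startup path). A trivial sketch to see which top-level override keys that file actually sets on a host like this; it assumes nothing about the file's schema beyond it being JSON:

    import json

    # List whatever top-level keys the override file contains; the schema
    # belongs to the SSM agent and is not assumed here.
    with open("/etc/amazon/ssm/amazon-ssm-agent.json") as f:
        print(sorted(json.load(f)))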
Feb 13 20:28:25.377642 systemd[1]: Startup finished in 1.602s (kernel) + 11.196s (initrd) + 12.268s (userspace) = 25.067s. Feb 13 20:28:25.384384 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:26.955359 kubelet[2186]: E0213 20:28:26.955293 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:26.958349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:26.958559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:28:26.959496 systemd[1]: kubelet.service: Consumed 1.054s CPU time. Feb 13 20:28:28.692454 systemd-resolved[1760]: Clock change detected. Flushing caches. Feb 13 20:28:30.725307 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:28:30.731113 systemd[1]: Started sshd@0-172.31.17.255:22-139.178.89.65:57882.service - OpenSSH per-connection server daemon (139.178.89.65:57882). Feb 13 20:28:30.963760 sshd[2197]: Accepted publickey for core from 139.178.89.65 port 57882 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:30.967214 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:30.986174 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:28:30.997316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:28:31.018558 systemd-logind[1942]: New session 1 of user core. Feb 13 20:28:31.056420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:28:31.070351 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:28:31.075055 (systemd)[2201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:28:31.364980 systemd[2201]: Queued start job for default target default.target. Feb 13 20:28:31.375798 systemd[2201]: Created slice app.slice - User Application Slice. Feb 13 20:28:31.375845 systemd[2201]: Reached target paths.target - Paths. Feb 13 20:28:31.375867 systemd[2201]: Reached target timers.target - Timers. Feb 13 20:28:31.377402 systemd[2201]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:28:31.394323 systemd[2201]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:28:31.394514 systemd[2201]: Reached target sockets.target - Sockets. Feb 13 20:28:31.394536 systemd[2201]: Reached target basic.target - Basic System. Feb 13 20:28:31.394592 systemd[2201]: Reached target default.target - Main User Target. Feb 13 20:28:31.394630 systemd[2201]: Startup finished in 310ms. Feb 13 20:28:31.394766 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:28:31.404683 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:28:31.566601 systemd[1]: Started sshd@1-172.31.17.255:22-139.178.89.65:57886.service - OpenSSH per-connection server daemon (139.178.89.65:57886). 
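The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the normal state of a node that has booted but not yet been joined to a cluster: that file is usually written by kubeadm during init/join, after which the unit is restarted. The sketch below only checks for the file and prints a minimal, hypothetical KubeletConfiguration for orientation; the field values are assumptions, not this node's eventual config (though the later container-manager entry does show the systemd cgroup driver in use).

    import pathlib

    cfg_path = pathlib.Path("/var/lib/kubelet/config.yaml")

    # Minimal, hypothetical KubeletConfiguration; kubeadm writes the real one.
    minimal_cfg = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )

    if cfg_path.exists():
        print(cfg_path.read_text())
    else:
        print(f"{cfg_path} missing; kubeadm init/join normally creates it, e.g.:")
        print(minimal_cfg)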
Feb 13 20:28:31.736834 sshd[2212]: Accepted publickey for core from 139.178.89.65 port 57886 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:31.739039 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:31.746771 systemd-logind[1942]: New session 2 of user core. Feb 13 20:28:31.754811 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:28:31.889720 sshd[2212]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:31.895006 systemd[1]: sshd@1-172.31.17.255:22-139.178.89.65:57886.service: Deactivated successfully. Feb 13 20:28:31.897111 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:28:31.898786 systemd-logind[1942]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:28:31.901415 systemd-logind[1942]: Removed session 2. Feb 13 20:28:31.952803 systemd[1]: Started sshd@2-172.31.17.255:22-139.178.89.65:57902.service - OpenSSH per-connection server daemon (139.178.89.65:57902). Feb 13 20:28:32.175565 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 57902 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:32.177392 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:32.189033 systemd-logind[1942]: New session 3 of user core. Feb 13 20:28:32.194758 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:28:32.309140 sshd[2219]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:32.321890 systemd[1]: sshd@2-172.31.17.255:22-139.178.89.65:57902.service: Deactivated successfully. Feb 13 20:28:32.329988 systemd-logind[1942]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:28:32.330959 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:28:32.359910 systemd[1]: Started sshd@3-172.31.17.255:22-139.178.89.65:57912.service - OpenSSH per-connection server daemon (139.178.89.65:57912). Feb 13 20:28:32.366850 systemd-logind[1942]: Removed session 3. Feb 13 20:28:32.574291 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 57912 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:32.576317 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:32.582990 systemd-logind[1942]: New session 4 of user core. Feb 13 20:28:32.594697 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:28:32.721186 sshd[2226]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:32.725520 systemd[1]: sshd@3-172.31.17.255:22-139.178.89.65:57912.service: Deactivated successfully. Feb 13 20:28:32.729241 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:28:32.733880 systemd-logind[1942]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:28:32.741119 systemd-logind[1942]: Removed session 4. Feb 13 20:28:32.774895 systemd[1]: Started sshd@4-172.31.17.255:22-139.178.89.65:57924.service - OpenSSH per-connection server daemon (139.178.89.65:57924). Feb 13 20:28:32.982641 sshd[2233]: Accepted publickey for core from 139.178.89.65 port 57924 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:32.985547 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:33.003486 systemd-logind[1942]: New session 5 of user core. Feb 13 20:28:33.012701 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 20:28:33.158507 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:28:33.162121 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:28:33.192170 sudo[2236]: pam_unix(sudo:session): session closed for user root Feb 13 20:28:33.217791 sshd[2233]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:33.229267 systemd[1]: sshd@4-172.31.17.255:22-139.178.89.65:57924.service: Deactivated successfully. Feb 13 20:28:33.231636 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:28:33.233627 systemd-logind[1942]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:28:33.236487 systemd-logind[1942]: Removed session 5. Feb 13 20:28:33.270010 systemd[1]: Started sshd@5-172.31.17.255:22-139.178.89.65:57940.service - OpenSSH per-connection server daemon (139.178.89.65:57940). Feb 13 20:28:33.463949 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 57940 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:33.466743 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:33.494495 systemd-logind[1942]: New session 6 of user core. Feb 13 20:28:33.511853 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:28:33.625500 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:28:33.625903 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:28:33.635182 sudo[2245]: pam_unix(sudo:session): session closed for user root Feb 13 20:28:33.660855 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:28:33.661280 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:28:33.713275 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:28:33.740601 auditctl[2248]: No rules Feb 13 20:28:33.741089 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:28:33.741323 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:28:33.753162 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:28:33.868679 augenrules[2266]: No rules Feb 13 20:28:33.875014 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:28:33.876521 sudo[2244]: pam_unix(sudo:session): session closed for user root Feb 13 20:28:33.901584 sshd[2241]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:33.912470 systemd[1]: sshd@5-172.31.17.255:22-139.178.89.65:57940.service: Deactivated successfully. Feb 13 20:28:33.919326 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:28:33.924033 systemd-logind[1942]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:28:33.943224 systemd[1]: Started sshd@6-172.31.17.255:22-139.178.89.65:57956.service - OpenSSH per-connection server daemon (139.178.89.65:57956). Feb 13 20:28:33.947481 systemd-logind[1942]: Removed session 6. Feb 13 20:28:34.177443 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 57956 ssh2: RSA SHA256:oIrfXud0229KMldKYGvC5cz5yd43qE0PzS6D4HJPtjA Feb 13 20:28:34.179056 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:34.195719 systemd-logind[1942]: New session 7 of user core. 
Feb 13 20:28:34.204703 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:28:34.317647 sudo[2277]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:28:34.318044 sudo[2277]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:28:35.977917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:35.978174 systemd[1]: kubelet.service: Consumed 1.054s CPU time. Feb 13 20:28:35.995843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:36.167074 systemd[1]: Reloading requested from client PID 2310 ('systemctl') (unit session-7.scope)... Feb 13 20:28:36.167094 systemd[1]: Reloading... Feb 13 20:28:36.418260 zram_generator::config[2350]: No configuration found. Feb 13 20:28:36.624256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:36.775028 systemd[1]: Reloading finished in 607 ms. Feb 13 20:28:36.900690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:36.910599 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:28:36.915786 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:36.919669 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:28:36.930449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:36.952692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:37.372720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:37.375200 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:28:37.528420 kubelet[2413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:37.528907 kubelet[2413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:28:37.528907 kubelet[2413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:28:37.529056 kubelet[2413]: I0213 20:28:37.529018 2413 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:28:38.604359 kubelet[2413]: I0213 20:28:38.604301 2413 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:28:38.604359 kubelet[2413]: I0213 20:28:38.604341 2413 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:28:38.608461 kubelet[2413]: I0213 20:28:38.605534 2413 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:28:38.652214 kubelet[2413]: I0213 20:28:38.652163 2413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:28:38.668191 kubelet[2413]: E0213 20:28:38.668119 2413 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:28:38.668191 kubelet[2413]: I0213 20:28:38.668159 2413 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:28:38.673695 kubelet[2413]: I0213 20:28:38.673641 2413 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:28:38.673985 kubelet[2413]: I0213 20:28:38.673946 2413 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:28:38.674181 kubelet[2413]: I0213 20:28:38.673983 2413 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.17.255","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:28:38.674331 kubelet[2413]: I0213 20:28:38.674187 2413 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:28:38.674331 kubelet[2413]: I0213 20:28:38.674202 2413 container_manager_linux.go:304] "Creating device plugin manager" 
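The long container_manager NodeConfig entry above embeds the hard eviction thresholds this kubelet will enforce; restated directly from that entry for readability (nothing added):

    # Values copied from HardEvictionThresholds in the NodeConfig log entry.
    hard_eviction = {
        "memory.available":   "100Mi",  # absolute quantity
        "nodefs.available":   "10%",    # percentages of the respective filesystem
        "nodefs.inodesFree":  "5%",
        "imagefs.available":  "15%",
        "imagefs.inodesFree": "5%",
    }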
Feb 13 20:28:38.674412 kubelet[2413]: I0213 20:28:38.674353 2413 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:38.683050 kubelet[2413]: I0213 20:28:38.682686 2413 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:28:38.683050 kubelet[2413]: I0213 20:28:38.682722 2413 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:28:38.683050 kubelet[2413]: I0213 20:28:38.682748 2413 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:28:38.683050 kubelet[2413]: I0213 20:28:38.682759 2413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:28:38.686562 kubelet[2413]: E0213 20:28:38.686378 2413 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:38.686562 kubelet[2413]: E0213 20:28:38.686456 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:38.691424 kubelet[2413]: I0213 20:28:38.690710 2413 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:28:38.691424 kubelet[2413]: I0213 20:28:38.691302 2413 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:28:38.692597 kubelet[2413]: W0213 20:28:38.692567 2413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:28:38.696881 kubelet[2413]: I0213 20:28:38.695148 2413 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:28:38.696881 kubelet[2413]: I0213 20:28:38.695198 2413 server.go:1287] "Started kubelet" Feb 13 20:28:38.696881 kubelet[2413]: I0213 20:28:38.695820 2413 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:28:38.697557 kubelet[2413]: I0213 20:28:38.697535 2413 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:28:38.700266 kubelet[2413]: I0213 20:28:38.700202 2413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:28:38.700787 kubelet[2413]: I0213 20:28:38.700766 2413 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:28:38.702043 kubelet[2413]: I0213 20:28:38.702026 2413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:28:38.718241 kubelet[2413]: I0213 20:28:38.718197 2413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:28:38.721317 kubelet[2413]: I0213 20:28:38.721281 2413 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:28:38.721680 kubelet[2413]: E0213 20:28:38.721661 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:38.722367 kubelet[2413]: I0213 20:28:38.722346 2413 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:28:38.722527 kubelet[2413]: I0213 20:28:38.722516 2413 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:28:38.739443 kubelet[2413]: I0213 20:28:38.739388 2413 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:28:38.740629 kubelet[2413]: I0213 20:28:38.740597 2413 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:28:38.748618 kubelet[2413]: W0213 20:28:38.748472 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.255" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 20:28:38.749127 kubelet[2413]: E0213 20:28:38.748652 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.17.255\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 20:28:38.754670 kubelet[2413]: E0213 20:28:38.748831 2413 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.255.1823de7f213f9c88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.255,UID:172.31.17.255,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.255,},FirstTimestamp:2025-02-13 20:28:38.695173256 +0000 UTC m=+1.301099491,LastTimestamp:2025-02-13 20:28:38.695173256 +0000 UTC m=+1.301099491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.255,}" Feb 13 20:28:38.755175 kubelet[2413]: W0213 20:28:38.755082 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 20:28:38.756380 kubelet[2413]: E0213 20:28:38.756346 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 20:28:38.757573 kubelet[2413]: I0213 20:28:38.756109 2413 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:28:38.762056 kubelet[2413]: E0213 20:28:38.762024 2413 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:28:38.773256 kubelet[2413]: I0213 20:28:38.773231 2413 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:28:38.773442 kubelet[2413]: I0213 20:28:38.773402 2413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:28:38.773526 kubelet[2413]: I0213 20:28:38.773515 2413 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:38.775485 kubelet[2413]: I0213 20:28:38.775462 2413 policy_none.go:49] "None policy: Start" Feb 13 20:28:38.775485 kubelet[2413]: I0213 20:28:38.775484 2413 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:28:38.775785 kubelet[2413]: I0213 20:28:38.775498 2413 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:28:38.782589 kubelet[2413]: E0213 20:28:38.782559 2413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.17.255\" not found" node="172.31.17.255" Feb 13 20:28:38.785745 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:28:38.804970 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:28:38.813311 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:28:38.822968 kubelet[2413]: E0213 20:28:38.822869 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:38.826461 kubelet[2413]: I0213 20:28:38.825386 2413 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:28:38.826461 kubelet[2413]: I0213 20:28:38.825625 2413 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:28:38.826461 kubelet[2413]: I0213 20:28:38.825650 2413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:28:38.826461 kubelet[2413]: I0213 20:28:38.826313 2413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:28:38.830353 kubelet[2413]: E0213 20:28:38.830193 2413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:28:38.830504 kubelet[2413]: E0213 20:28:38.830379 2413 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.255\" not found" Feb 13 20:28:38.948523 kubelet[2413]: I0213 20:28:38.943722 2413 kubelet_node_status.go:76] "Attempting to register node" node="172.31.17.255" Feb 13 20:28:38.959727 kubelet[2413]: I0213 20:28:38.959692 2413 kubelet_node_status.go:79] "Successfully registered node" node="172.31.17.255" Feb 13 20:28:38.959934 kubelet[2413]: E0213 20:28:38.959737 2413 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.17.255\": node \"172.31.17.255\" not found" Feb 13 20:28:38.976387 kubelet[2413]: E0213 20:28:38.976358 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.050108 kubelet[2413]: I0213 20:28:39.050030 2413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:28:39.052272 kubelet[2413]: I0213 20:28:39.052176 2413 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:28:39.052272 kubelet[2413]: I0213 20:28:39.052256 2413 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:28:39.053769 kubelet[2413]: I0213 20:28:39.053043 2413 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 20:28:39.053769 kubelet[2413]: I0213 20:28:39.053061 2413 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:28:39.060332 kubelet[2413]: E0213 20:28:39.056574 2413 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 20:28:39.079551 kubelet[2413]: E0213 20:28:39.079508 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.180636 kubelet[2413]: E0213 20:28:39.180587 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.280882 kubelet[2413]: E0213 20:28:39.280749 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.381559 kubelet[2413]: E0213 20:28:39.381505 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.469061 sudo[2277]: pam_unix(sudo:session): session closed for user root Feb 13 20:28:39.482596 kubelet[2413]: E0213 20:28:39.482553 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.503071 sshd[2274]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:39.511259 systemd-logind[1942]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:28:39.512634 systemd[1]: sshd@6-172.31.17.255:22-139.178.89.65:57956.service: Deactivated successfully. Feb 13 20:28:39.515590 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:28:39.516915 systemd-logind[1942]: Removed session 7. 
Feb 13 20:28:39.590049 kubelet[2413]: E0213 20:28:39.586065 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.611374 kubelet[2413]: I0213 20:28:39.611335 2413 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 20:28:39.611865 kubelet[2413]: W0213 20:28:39.611550 2413 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 20:28:39.611865 kubelet[2413]: W0213 20:28:39.611598 2413 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 20:28:39.686318 kubelet[2413]: E0213 20:28:39.686265 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.687524 kubelet[2413]: E0213 20:28:39.687400 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:39.786709 kubelet[2413]: E0213 20:28:39.786656 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.887883 kubelet[2413]: E0213 20:28:39.887758 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:39.988488 kubelet[2413]: E0213 20:28:39.988421 2413 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.17.255\" not found" Feb 13 20:28:40.089421 kubelet[2413]: I0213 20:28:40.089387 2413 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 20:28:40.091146 containerd[1967]: time="2025-02-13T20:28:40.091089814Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:28:40.091950 kubelet[2413]: I0213 20:28:40.091571 2413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 20:28:40.688051 kubelet[2413]: I0213 20:28:40.687939 2413 apiserver.go:52] "Watching apiserver" Feb 13 20:28:40.688051 kubelet[2413]: E0213 20:28:40.687962 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:40.693687 kubelet[2413]: E0213 20:28:40.692524 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:40.720913 systemd[1]: Created slice kubepods-besteffort-pod13cf1627_1b8e_4fc9_87b4_6d454297c91c.slice - libcontainer container kubepods-besteffort-pod13cf1627_1b8e_4fc9_87b4_6d454297c91c.slice. 
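
The runtime-config update above hands the node's PodCIDR (192.168.1.0/24) to the container runtime; once a CNI config is dropped in, pod addresses on this node are expected to come from that prefix. A minimal, hedged membership check with Go's net/netip, using addresses taken from the log (the node address 172.31.17.255 deliberately falls outside the pod range):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // PodCIDR pushed to the runtime in the entries above.
        podCIDR := netip.MustParsePrefix("192.168.1.0/24")

        // A plausible pod address versus the node's own address from the log.
        for _, a := range []string{"192.168.1.5", "172.31.17.255"} {
            addr := netip.MustParseAddr(a)
            fmt.Printf("%s in %s: %v\n", addr, podCIDR, podCIDR.Contains(addr))
        }
    }
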
Feb 13 20:28:40.729573 kubelet[2413]: I0213 20:28:40.729529 2413 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:28:40.742379 kubelet[2413]: I0213 20:28:40.740073 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-lib-modules\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742379 kubelet[2413]: I0213 20:28:40.740119 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-var-lib-calico\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742379 kubelet[2413]: I0213 20:28:40.741631 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bb98fc29-084d-4742-a951-d1e39bf46fb9-socket-dir\") pod \"csi-node-driver-gvmbv\" (UID: \"bb98fc29-084d-4742-a951-d1e39bf46fb9\") " pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:40.742379 kubelet[2413]: I0213 20:28:40.741669 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1ef4f4b-dd7e-457d-ba9b-a588c382257d-xtables-lock\") pod \"kube-proxy-mjv29\" (UID: \"e1ef4f4b-dd7e-457d-ba9b-a588c382257d\") " pod="kube-system/kube-proxy-mjv29" Feb 13 20:28:40.742379 kubelet[2413]: I0213 20:28:40.741695 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4p6\" (UniqueName: \"kubernetes.io/projected/e1ef4f4b-dd7e-457d-ba9b-a588c382257d-kube-api-access-zg4p6\") pod \"kube-proxy-mjv29\" (UID: \"e1ef4f4b-dd7e-457d-ba9b-a588c382257d\") " pod="kube-system/kube-proxy-mjv29" Feb 13 20:28:40.742687 kubelet[2413]: I0213 20:28:40.741783 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-policysync\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742687 kubelet[2413]: I0213 20:28:40.741808 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-cni-log-dir\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742687 kubelet[2413]: I0213 20:28:40.741832 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bb98fc29-084d-4742-a951-d1e39bf46fb9-varrun\") pod \"csi-node-driver-gvmbv\" (UID: \"bb98fc29-084d-4742-a951-d1e39bf46fb9\") " pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:40.742687 kubelet[2413]: I0213 20:28:40.741911 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bb98fc29-084d-4742-a951-d1e39bf46fb9-registration-dir\") pod \"csi-node-driver-gvmbv\" (UID: 
\"bb98fc29-084d-4742-a951-d1e39bf46fb9\") " pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:40.742687 kubelet[2413]: I0213 20:28:40.741941 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62tz\" (UniqueName: \"kubernetes.io/projected/bb98fc29-084d-4742-a951-d1e39bf46fb9-kube-api-access-q62tz\") pod \"csi-node-driver-gvmbv\" (UID: \"bb98fc29-084d-4742-a951-d1e39bf46fb9\") " pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:40.742881 kubelet[2413]: I0213 20:28:40.741964 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1ef4f4b-dd7e-457d-ba9b-a588c382257d-kube-proxy\") pod \"kube-proxy-mjv29\" (UID: \"e1ef4f4b-dd7e-457d-ba9b-a588c382257d\") " pod="kube-system/kube-proxy-mjv29" Feb 13 20:28:40.742881 kubelet[2413]: I0213 20:28:40.741986 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13cf1627-1b8e-4fc9-87b4-6d454297c91c-node-certs\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742881 kubelet[2413]: I0213 20:28:40.742009 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13cf1627-1b8e-4fc9-87b4-6d454297c91c-tigera-ca-bundle\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742881 kubelet[2413]: I0213 20:28:40.742033 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-var-run-calico\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.742881 kubelet[2413]: I0213 20:28:40.742057 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-cni-bin-dir\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.746346 kubelet[2413]: I0213 20:28:40.742081 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-cni-net-dir\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.746346 kubelet[2413]: I0213 20:28:40.742107 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-flexvol-driver-host\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.746346 kubelet[2413]: I0213 20:28:40.742137 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzd2w\" (UniqueName: \"kubernetes.io/projected/13cf1627-1b8e-4fc9-87b4-6d454297c91c-kube-api-access-lzd2w\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " 
pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.746346 kubelet[2413]: I0213 20:28:40.742162 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb98fc29-084d-4742-a951-d1e39bf46fb9-kubelet-dir\") pod \"csi-node-driver-gvmbv\" (UID: \"bb98fc29-084d-4742-a951-d1e39bf46fb9\") " pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:40.746346 kubelet[2413]: I0213 20:28:40.742186 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13cf1627-1b8e-4fc9-87b4-6d454297c91c-xtables-lock\") pod \"calico-node-jzg2v\" (UID: \"13cf1627-1b8e-4fc9-87b4-6d454297c91c\") " pod="calico-system/calico-node-jzg2v" Feb 13 20:28:40.746552 kubelet[2413]: I0213 20:28:40.742209 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1ef4f4b-dd7e-457d-ba9b-a588c382257d-lib-modules\") pod \"kube-proxy-mjv29\" (UID: \"e1ef4f4b-dd7e-457d-ba9b-a588c382257d\") " pod="kube-system/kube-proxy-mjv29" Feb 13 20:28:40.780990 systemd[1]: Created slice kubepods-besteffort-pode1ef4f4b_dd7e_457d_ba9b_a588c382257d.slice - libcontainer container kubepods-besteffort-pode1ef4f4b_dd7e_457d_ba9b_a588c382257d.slice. Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.853620 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.862533 kubelet[2413]: W0213 20:28:40.853650 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.853691 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.854737 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.862533 kubelet[2413]: W0213 20:28:40.854754 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.854859 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.855017 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.862533 kubelet[2413]: W0213 20:28:40.855028 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.855108 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:40.862533 kubelet[2413]: E0213 20:28:40.855573 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.864129 kubelet[2413]: W0213 20:28:40.855586 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.864129 kubelet[2413]: E0213 20:28:40.855674 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.871359 kubelet[2413]: E0213 20:28:40.870709 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.871359 kubelet[2413]: W0213 20:28:40.870740 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.871359 kubelet[2413]: E0213 20:28:40.871224 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.871359 kubelet[2413]: W0213 20:28:40.871241 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.872208 kubelet[2413]: E0213 20:28:40.872096 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.872208 kubelet[2413]: W0213 20:28:40.872112 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.873352 kubelet[2413]: E0213 20:28:40.873239 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.873352 kubelet[2413]: W0213 20:28:40.873255 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.873702 kubelet[2413]: E0213 20:28:40.873688 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.873886 kubelet[2413]: W0213 20:28:40.873773 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.873886 kubelet[2413]: E0213 20:28:40.873795 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:40.874134 kubelet[2413]: E0213 20:28:40.874121 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.874389 kubelet[2413]: W0213 20:28:40.874205 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.874389 kubelet[2413]: E0213 20:28:40.875066 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.875527 kubelet[2413]: E0213 20:28:40.875514 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.875609 kubelet[2413]: W0213 20:28:40.875597 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.878493 kubelet[2413]: E0213 20:28:40.878469 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.884267 kubelet[2413]: E0213 20:28:40.884222 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.888461 kubelet[2413]: E0213 20:28:40.884422 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.888461 kubelet[2413]: E0213 20:28:40.884496 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.888461 kubelet[2413]: E0213 20:28:40.886232 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.888461 kubelet[2413]: E0213 20:28:40.886478 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.888461 kubelet[2413]: W0213 20:28:40.886510 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.888461 kubelet[2413]: E0213 20:28:40.886533 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:40.904206 kubelet[2413]: E0213 20:28:40.904135 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.904711 kubelet[2413]: W0213 20:28:40.904682 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.904973 kubelet[2413]: E0213 20:28:40.904953 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.908464 kubelet[2413]: E0213 20:28:40.906039 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.908647 kubelet[2413]: W0213 20:28:40.908625 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.908804 kubelet[2413]: E0213 20:28:40.908781 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.909160 kubelet[2413]: E0213 20:28:40.909145 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.909289 kubelet[2413]: W0213 20:28:40.909274 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.909884 kubelet[2413]: E0213 20:28:40.909854 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.910101 kubelet[2413]: E0213 20:28:40.910087 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.910266 kubelet[2413]: W0213 20:28:40.910168 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.910360 kubelet[2413]: E0213 20:28:40.910346 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.916255 kubelet[2413]: E0213 20:28:40.912437 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.916255 kubelet[2413]: W0213 20:28:40.913999 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.916255 kubelet[2413]: E0213 20:28:40.915572 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:40.917505 kubelet[2413]: E0213 20:28:40.916712 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.917505 kubelet[2413]: W0213 20:28:40.916729 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.917505 kubelet[2413]: E0213 20:28:40.917413 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.918006 kubelet[2413]: E0213 20:28:40.917631 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.918006 kubelet[2413]: W0213 20:28:40.917645 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.918006 kubelet[2413]: E0213 20:28:40.917681 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.918197 kubelet[2413]: E0213 20:28:40.918175 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.918197 kubelet[2413]: W0213 20:28:40.918193 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.918679 kubelet[2413]: E0213 20:28:40.918552 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.918800 kubelet[2413]: E0213 20:28:40.918771 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.918800 kubelet[2413]: W0213 20:28:40.918786 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.918921 kubelet[2413]: E0213 20:28:40.918901 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.919110 kubelet[2413]: E0213 20:28:40.919089 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.919110 kubelet[2413]: W0213 20:28:40.919104 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.919295 kubelet[2413]: E0213 20:28:40.919179 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:40.919997 kubelet[2413]: E0213 20:28:40.919465 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.919997 kubelet[2413]: W0213 20:28:40.919477 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.919997 kubelet[2413]: E0213 20:28:40.919502 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.919997 kubelet[2413]: E0213 20:28:40.919881 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.919997 kubelet[2413]: W0213 20:28:40.919896 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.919997 kubelet[2413]: E0213 20:28:40.919908 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:40.953536 kubelet[2413]: E0213 20:28:40.951033 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:40.954768 kubelet[2413]: W0213 20:28:40.954567 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:40.954768 kubelet[2413]: E0213 20:28:40.954699 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:41.064466 containerd[1967]: time="2025-02-13T20:28:41.064104507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jzg2v,Uid:13cf1627-1b8e-4fc9-87b4-6d454297c91c,Namespace:calico-system,Attempt:0,}" Feb 13 20:28:41.098340 containerd[1967]: time="2025-02-13T20:28:41.096685052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mjv29,Uid:e1ef4f4b-dd7e-457d-ba9b-a588c382257d,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:41.688156 kubelet[2413]: E0213 20:28:41.688119 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:41.758171 containerd[1967]: time="2025-02-13T20:28:41.758119978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:41.759101 containerd[1967]: time="2025-02-13T20:28:41.759053984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:41.760077 containerd[1967]: time="2025-02-13T20:28:41.760042327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:28:41.761495 containerd[1967]: time="2025-02-13T20:28:41.761460754Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:41.763085 containerd[1967]: time="2025-02-13T20:28:41.762864721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:28:41.764976 containerd[1967]: time="2025-02-13T20:28:41.764918429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:41.767284 containerd[1967]: time="2025-02-13T20:28:41.765839109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 668.579405ms" Feb 13 20:28:41.768577 containerd[1967]: time="2025-02-13T20:28:41.768539605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 704.346278ms" Feb 13 20:28:41.864269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23747038.mount: Deactivated successfully. 
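
The repeated driver-call and plugin-probe errors above come from the FlexVolume prober: the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status on stdout; the binary is not present on this node, the call produces no output, and unmarshalling an empty string as JSON fails with exactly "unexpected end of JSON input". A rough stand-alone sketch of that failure mode follows. It is not the kubelet's actual implementation; the DriverStatus struct and the exact wording around the exec error are assumptions, only the empty-output JSON error is reproduced faithfully.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is an illustrative stand-in for the JSON a FlexVolume
    // driver prints in response to "init"; it is not the full upstream schema.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func callDriver(driver string, args ...string) (*DriverStatus, error) {
        out, err := exec.Command(driver, args...).Output()
        if err != nil {
            // The binary is missing on this node, so the call itself fails
            // (the kubelet reports this as "executable file not found in $PATH").
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }
        var st DriverStatus
        // With empty output this is the error the log repeats:
        // "unexpected end of JSON input".
        if uerr := json.Unmarshal(out, &st); uerr != nil {
            return nil, fmt.Errorf("failed to unmarshal output for command init: %w", uerr)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println("probe error:", err)
    }
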
Feb 13 20:28:42.057401 kubelet[2413]: E0213 20:28:42.054336 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:42.341447 containerd[1967]: time="2025-02-13T20:28:42.341285554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.343407576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.343462986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.343585311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.342852760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.342957936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.342998487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:42.344247 containerd[1967]: time="2025-02-13T20:28:42.343173090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:42.568313 systemd[1]: run-containerd-runc-k8s.io-017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56-runc.KY92bu.mount: Deactivated successfully. Feb 13 20:28:42.614587 systemd[1]: Started cri-containerd-15875968c53ca98bf567a134ce39d939138bb6475748a40f30b1fe03fab0ceb8.scope - libcontainer container 15875968c53ca98bf567a134ce39d939138bb6475748a40f30b1fe03fab0ceb8. Feb 13 20:28:42.632090 systemd[1]: Started cri-containerd-017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56.scope - libcontainer container 017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56. 
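
systemd runs each sandbox in a cri-containerd-<container id>.scope unit under the kubepods-besteffort-pod<uid>.slice created earlier; the pod UID's dashes become underscores in the slice name. A purely illustrative sketch that rebuilds those unit names from the UID and container ID seen in the log; the helper functions are assumptions of mine, not kubelet or containerd API.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reconstructs a slice name like
    // kubepods-besteffort-pod13cf1627_1b8e_4fc9_87b4_6d454297c91c.slice
    // from the QoS class and pod UID, matching the names systemd logged above.
    func podSlice(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    // containerScope reconstructs the per-container scope unit name.
    func containerScope(containerID string) string {
        return "cri-containerd-" + containerID + ".scope"
    }

    func main() {
        uid := "13cf1627-1b8e-4fc9-87b4-6d454297c91c" // calico-node-jzg2v pod UID from the log
        cid := "017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56"
        fmt.Println(podSlice("besteffort", uid))
        fmt.Println(containerScope(cid))
    }
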
Feb 13 20:28:42.674792 containerd[1967]: time="2025-02-13T20:28:42.674728091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mjv29,Uid:e1ef4f4b-dd7e-457d-ba9b-a588c382257d,Namespace:kube-system,Attempt:0,} returns sandbox id \"15875968c53ca98bf567a134ce39d939138bb6475748a40f30b1fe03fab0ceb8\"" Feb 13 20:28:42.679215 containerd[1967]: time="2025-02-13T20:28:42.679059752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:28:42.683691 containerd[1967]: time="2025-02-13T20:28:42.683564948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jzg2v,Uid:13cf1627-1b8e-4fc9-87b4-6d454297c91c,Namespace:calico-system,Attempt:0,} returns sandbox id \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\"" Feb 13 20:28:42.691015 kubelet[2413]: E0213 20:28:42.690891 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:43.691118 kubelet[2413]: E0213 20:28:43.691058 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:44.056976 kubelet[2413]: E0213 20:28:44.056727 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:44.662761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365935608.mount: Deactivated successfully. Feb 13 20:28:44.691590 kubelet[2413]: E0213 20:28:44.691555 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:45.417676 containerd[1967]: time="2025-02-13T20:28:45.417613034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:45.418892 containerd[1967]: time="2025-02-13T20:28:45.418734738Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 20:28:45.422073 containerd[1967]: time="2025-02-13T20:28:45.421790322Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:45.424639 containerd[1967]: time="2025-02-13T20:28:45.424594184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:45.425397 containerd[1967]: time="2025-02-13T20:28:45.425354535Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.746225599s" Feb 13 20:28:45.425502 containerd[1967]: time="2025-02-13T20:28:45.425404993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 20:28:45.427783 containerd[1967]: time="2025-02-13T20:28:45.427753342Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:28:45.429026 containerd[1967]: time="2025-02-13T20:28:45.428882532Z" level=info msg="CreateContainer within sandbox \"15875968c53ca98bf567a134ce39d939138bb6475748a40f30b1fe03fab0ceb8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:28:45.449618 containerd[1967]: time="2025-02-13T20:28:45.449566705Z" level=info msg="CreateContainer within sandbox \"15875968c53ca98bf567a134ce39d939138bb6475748a40f30b1fe03fab0ceb8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd4134aa1dde5780c22dd2f1caed7c11e1f24981c81cb42f1fdbff5da5b8db07\"" Feb 13 20:28:45.450363 containerd[1967]: time="2025-02-13T20:28:45.450330743Z" level=info msg="StartContainer for \"bd4134aa1dde5780c22dd2f1caed7c11e1f24981c81cb42f1fdbff5da5b8db07\"" Feb 13 20:28:45.490655 systemd[1]: Started cri-containerd-bd4134aa1dde5780c22dd2f1caed7c11e1f24981c81cb42f1fdbff5da5b8db07.scope - libcontainer container bd4134aa1dde5780c22dd2f1caed7c11e1f24981c81cb42f1fdbff5da5b8db07. Feb 13 20:28:45.533944 containerd[1967]: time="2025-02-13T20:28:45.533896045Z" level=info msg="StartContainer for \"bd4134aa1dde5780c22dd2f1caed7c11e1f24981c81cb42f1fdbff5da5b8db07\" returns successfully" Feb 13 20:28:45.693204 kubelet[2413]: E0213 20:28:45.693024 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:46.054357 kubelet[2413]: E0213 20:28:46.054200 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:46.142414 kubelet[2413]: I0213 20:28:46.142348 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mjv29" podStartSLOduration=4.393836853 podStartE2EDuration="7.142335599s" podCreationTimestamp="2025-02-13 20:28:39 +0000 UTC" firstStartedPulling="2025-02-13 20:28:42.678456243 +0000 UTC m=+5.284382470" lastFinishedPulling="2025-02-13 20:28:45.426954991 +0000 UTC m=+8.032881216" observedRunningTime="2025-02-13 20:28:46.142144477 +0000 UTC m=+8.748070721" watchObservedRunningTime="2025-02-13 20:28:46.142335599 +0000 UTC m=+8.748261842" Feb 13 20:28:46.193266 kubelet[2413]: E0213 20:28:46.193214 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.193422 kubelet[2413]: W0213 20:28:46.193400 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.193502 kubelet[2413]: E0213 20:28:46.193450 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.194906 kubelet[2413]: E0213 20:28:46.194869 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.194906 kubelet[2413]: W0213 20:28:46.194893 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.196723 kubelet[2413]: E0213 20:28:46.194917 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.196807 kubelet[2413]: E0213 20:28:46.196792 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.196873 kubelet[2413]: W0213 20:28:46.196810 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.196873 kubelet[2413]: E0213 20:28:46.196856 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.197762 kubelet[2413]: E0213 20:28:46.197742 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.197762 kubelet[2413]: W0213 20:28:46.197763 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.201615 kubelet[2413]: E0213 20:28:46.201496 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.204474 kubelet[2413]: E0213 20:28:46.204261 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.204474 kubelet[2413]: W0213 20:28:46.204306 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.204474 kubelet[2413]: E0213 20:28:46.204334 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.205277 kubelet[2413]: E0213 20:28:46.204698 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.205277 kubelet[2413]: W0213 20:28:46.204717 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.205277 kubelet[2413]: E0213 20:28:46.204733 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.205277 kubelet[2413]: E0213 20:28:46.204983 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.205277 kubelet[2413]: W0213 20:28:46.205032 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.205277 kubelet[2413]: E0213 20:28:46.205061 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.211621 kubelet[2413]: E0213 20:28:46.210851 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.211621 kubelet[2413]: W0213 20:28:46.210879 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.211621 kubelet[2413]: E0213 20:28:46.210905 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.213066 kubelet[2413]: E0213 20:28:46.213039 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.213149 kubelet[2413]: W0213 20:28:46.213067 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.213149 kubelet[2413]: E0213 20:28:46.213092 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.214317 kubelet[2413]: E0213 20:28:46.214276 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.214502 kubelet[2413]: W0213 20:28:46.214333 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.214502 kubelet[2413]: E0213 20:28:46.214356 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.217758 kubelet[2413]: E0213 20:28:46.215535 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.217758 kubelet[2413]: W0213 20:28:46.215553 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.217758 kubelet[2413]: E0213 20:28:46.215571 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.217758 kubelet[2413]: E0213 20:28:46.216723 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.217758 kubelet[2413]: W0213 20:28:46.216736 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.217758 kubelet[2413]: E0213 20:28:46.216753 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.217758 kubelet[2413]: E0213 20:28:46.217624 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.217758 kubelet[2413]: W0213 20:28:46.217637 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.217758 kubelet[2413]: E0213 20:28:46.217653 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.219854 kubelet[2413]: E0213 20:28:46.219831 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.219854 kubelet[2413]: W0213 20:28:46.219852 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.220203 kubelet[2413]: E0213 20:28:46.219870 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.221624 kubelet[2413]: E0213 20:28:46.221514 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.221624 kubelet[2413]: W0213 20:28:46.221539 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.221755 kubelet[2413]: E0213 20:28:46.221632 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.222685 kubelet[2413]: E0213 20:28:46.222667 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.222978 kubelet[2413]: W0213 20:28:46.222958 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.223077 kubelet[2413]: E0213 20:28:46.223062 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.223404 kubelet[2413]: E0213 20:28:46.223391 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.223505 kubelet[2413]: W0213 20:28:46.223490 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.223584 kubelet[2413]: E0213 20:28:46.223570 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.223972 kubelet[2413]: E0213 20:28:46.223958 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.224078 kubelet[2413]: W0213 20:28:46.224065 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.224152 kubelet[2413]: E0213 20:28:46.224140 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.224505 kubelet[2413]: E0213 20:28:46.224493 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.224589 kubelet[2413]: W0213 20:28:46.224578 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.224654 kubelet[2413]: E0213 20:28:46.224644 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.225202 kubelet[2413]: E0213 20:28:46.225057 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.225202 kubelet[2413]: W0213 20:28:46.225103 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.225202 kubelet[2413]: E0213 20:28:46.225118 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.317531 kubelet[2413]: E0213 20:28:46.317399 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.317531 kubelet[2413]: W0213 20:28:46.317444 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.317531 kubelet[2413]: E0213 20:28:46.317468 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.317794 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319215 kubelet[2413]: W0213 20:28:46.317809 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.317839 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.318101 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319215 kubelet[2413]: W0213 20:28:46.318111 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.318136 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.318356 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319215 kubelet[2413]: W0213 20:28:46.318366 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.318381 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.319215 kubelet[2413]: E0213 20:28:46.318612 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319874 kubelet[2413]: W0213 20:28:46.318622 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.319874 kubelet[2413]: E0213 20:28:46.319069 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.319874 kubelet[2413]: E0213 20:28:46.319240 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319874 kubelet[2413]: W0213 20:28:46.319251 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.319874 kubelet[2413]: E0213 20:28:46.319267 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.319874 kubelet[2413]: E0213 20:28:46.319511 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319874 kubelet[2413]: W0213 20:28:46.319522 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.319874 kubelet[2413]: E0213 20:28:46.319544 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.319874 kubelet[2413]: E0213 20:28:46.319864 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.319874 kubelet[2413]: W0213 20:28:46.319877 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.320910 kubelet[2413]: E0213 20:28:46.319903 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.320910 kubelet[2413]: E0213 20:28:46.320239 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.320910 kubelet[2413]: W0213 20:28:46.320251 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.320910 kubelet[2413]: E0213 20:28:46.320277 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.320910 kubelet[2413]: E0213 20:28:46.320788 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.320910 kubelet[2413]: W0213 20:28:46.320800 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.320910 kubelet[2413]: E0213 20:28:46.320816 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.321657 kubelet[2413]: E0213 20:28:46.321265 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.321657 kubelet[2413]: W0213 20:28:46.321276 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.321657 kubelet[2413]: E0213 20:28:46.321302 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:46.321657 kubelet[2413]: E0213 20:28:46.321563 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:46.321657 kubelet[2413]: W0213 20:28:46.321573 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:46.321657 kubelet[2413]: E0213 20:28:46.321586 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:46.693486 kubelet[2413]: E0213 20:28:46.693417 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:46.931232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580696983.mount: Deactivated successfully. Feb 13 20:28:47.089560 containerd[1967]: time="2025-02-13T20:28:47.089514132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:47.092838 containerd[1967]: time="2025-02-13T20:28:47.092542842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:28:47.094102 containerd[1967]: time="2025-02-13T20:28:47.094059605Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:47.099629 containerd[1967]: time="2025-02-13T20:28:47.098163081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:47.099629 containerd[1967]: time="2025-02-13T20:28:47.099467026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.671671413s" Feb 13 20:28:47.099629 containerd[1967]: time="2025-02-13T20:28:47.099511816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:28:47.102056 containerd[1967]: time="2025-02-13T20:28:47.102021322Z" level=info msg="CreateContainer within sandbox \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:28:47.124894 containerd[1967]: time="2025-02-13T20:28:47.124855309Z" level=info msg="CreateContainer within sandbox \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985\"" Feb 13 20:28:47.127362 containerd[1967]: time="2025-02-13T20:28:47.125793756Z" level=info msg="StartContainer for \"75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985\"" Feb 13 
20:28:47.135132 kubelet[2413]: E0213 20:28:47.135078 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.135132 kubelet[2413]: W0213 20:28:47.135101 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.135132 kubelet[2413]: E0213 20:28:47.135124 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.137625 kubelet[2413]: E0213 20:28:47.137480 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.137625 kubelet[2413]: W0213 20:28:47.137501 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.137625 kubelet[2413]: E0213 20:28:47.137527 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.137793 kubelet[2413]: E0213 20:28:47.137769 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.137793 kubelet[2413]: W0213 20:28:47.137782 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.138240 kubelet[2413]: E0213 20:28:47.137795 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.138240 kubelet[2413]: E0213 20:28:47.138116 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.138240 kubelet[2413]: W0213 20:28:47.138127 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.138240 kubelet[2413]: E0213 20:28:47.138142 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.138596 kubelet[2413]: E0213 20:28:47.138439 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.138596 kubelet[2413]: W0213 20:28:47.138451 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.138596 kubelet[2413]: E0213 20:28:47.138465 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:47.139071 kubelet[2413]: E0213 20:28:47.138662 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.139071 kubelet[2413]: W0213 20:28:47.138672 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.139071 kubelet[2413]: E0213 20:28:47.138683 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.139389 kubelet[2413]: E0213 20:28:47.139096 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.139389 kubelet[2413]: W0213 20:28:47.139107 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.139389 kubelet[2413]: E0213 20:28:47.139123 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.139713 kubelet[2413]: E0213 20:28:47.139515 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.139713 kubelet[2413]: W0213 20:28:47.139526 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.139713 kubelet[2413]: E0213 20:28:47.139539 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.140040 kubelet[2413]: E0213 20:28:47.140019 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.140040 kubelet[2413]: W0213 20:28:47.140037 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.140293 kubelet[2413]: E0213 20:28:47.140050 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.140869 kubelet[2413]: E0213 20:28:47.140850 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.140869 kubelet[2413]: W0213 20:28:47.140865 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.140869 kubelet[2413]: E0213 20:28:47.140878 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:47.141202 kubelet[2413]: E0213 20:28:47.141110 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.141202 kubelet[2413]: W0213 20:28:47.141121 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.141202 kubelet[2413]: E0213 20:28:47.141133 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.141366 kubelet[2413]: E0213 20:28:47.141351 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.141366 kubelet[2413]: W0213 20:28:47.141364 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.141506 kubelet[2413]: E0213 20:28:47.141377 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.141692 kubelet[2413]: E0213 20:28:47.141661 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.141692 kubelet[2413]: W0213 20:28:47.141684 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.141932 kubelet[2413]: E0213 20:28:47.141697 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.142049 kubelet[2413]: E0213 20:28:47.142032 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.142049 kubelet[2413]: W0213 20:28:47.142047 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.142140 kubelet[2413]: E0213 20:28:47.142060 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.142611 kubelet[2413]: E0213 20:28:47.142596 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.142611 kubelet[2413]: W0213 20:28:47.142611 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.142835 kubelet[2413]: E0213 20:28:47.142624 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:47.142835 kubelet[2413]: E0213 20:28:47.142830 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.142945 kubelet[2413]: W0213 20:28:47.142841 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.142945 kubelet[2413]: E0213 20:28:47.142853 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.143363 kubelet[2413]: E0213 20:28:47.143152 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.143363 kubelet[2413]: W0213 20:28:47.143165 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.143363 kubelet[2413]: E0213 20:28:47.143178 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.143530 kubelet[2413]: E0213 20:28:47.143405 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.143530 kubelet[2413]: W0213 20:28:47.143416 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.143530 kubelet[2413]: E0213 20:28:47.143475 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.143776 kubelet[2413]: E0213 20:28:47.143749 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.143776 kubelet[2413]: W0213 20:28:47.143770 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.143891 kubelet[2413]: E0213 20:28:47.143783 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.144327 kubelet[2413]: E0213 20:28:47.144085 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.144327 kubelet[2413]: W0213 20:28:47.144099 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.144327 kubelet[2413]: E0213 20:28:47.144111 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:47.174663 systemd[1]: Started cri-containerd-75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985.scope - libcontainer container 75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985. Feb 13 20:28:47.233304 kubelet[2413]: E0213 20:28:47.232783 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.233304 kubelet[2413]: W0213 20:28:47.232814 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.233304 kubelet[2413]: E0213 20:28:47.232839 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.233616 containerd[1967]: time="2025-02-13T20:28:47.233068239Z" level=info msg="StartContainer for \"75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985\" returns successfully" Feb 13 20:28:47.234030 kubelet[2413]: E0213 20:28:47.233759 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.234030 kubelet[2413]: W0213 20:28:47.233779 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.234030 kubelet[2413]: E0213 20:28:47.233800 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.235447 kubelet[2413]: E0213 20:28:47.234643 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.235447 kubelet[2413]: W0213 20:28:47.234659 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.235447 kubelet[2413]: E0213 20:28:47.234674 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.235715 kubelet[2413]: E0213 20:28:47.235699 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.235816 kubelet[2413]: W0213 20:28:47.235803 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.235894 kubelet[2413]: E0213 20:28:47.235881 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:47.237381 kubelet[2413]: E0213 20:28:47.236904 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.237531 kubelet[2413]: W0213 20:28:47.237515 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.237999 kubelet[2413]: E0213 20:28:47.237970 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.238811 kubelet[2413]: E0213 20:28:47.238458 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.238811 kubelet[2413]: W0213 20:28:47.238473 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.238811 kubelet[2413]: E0213 20:28:47.238489 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.239863 kubelet[2413]: E0213 20:28:47.239825 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.239863 kubelet[2413]: W0213 20:28:47.239842 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.240567 kubelet[2413]: E0213 20:28:47.240296 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.240921 kubelet[2413]: E0213 20:28:47.240664 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.240921 kubelet[2413]: W0213 20:28:47.240677 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.240921 kubelet[2413]: E0213 20:28:47.240697 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.241227 kubelet[2413]: E0213 20:28:47.241178 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.241227 kubelet[2413]: W0213 20:28:47.241194 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.241227 kubelet[2413]: E0213 20:28:47.241208 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:28:47.242027 kubelet[2413]: E0213 20:28:47.241678 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.242027 kubelet[2413]: W0213 20:28:47.241692 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.242027 kubelet[2413]: E0213 20:28:47.241704 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.242446 kubelet[2413]: E0213 20:28:47.242413 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.242813 kubelet[2413]: W0213 20:28:47.242522 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.242813 kubelet[2413]: E0213 20:28:47.242541 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.242998 kubelet[2413]: E0213 20:28:47.242986 2413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:28:47.243086 kubelet[2413]: W0213 20:28:47.243073 2413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:28:47.243164 kubelet[2413]: E0213 20:28:47.243151 2413 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:28:47.248643 systemd[1]: cri-containerd-75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985.scope: Deactivated successfully. Feb 13 20:28:47.508303 containerd[1967]: time="2025-02-13T20:28:47.508061127Z" level=info msg="shim disconnected" id=75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985 namespace=k8s.io Feb 13 20:28:47.508303 containerd[1967]: time="2025-02-13T20:28:47.508195663Z" level=warning msg="cleaning up after shim disconnected" id=75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985 namespace=k8s.io Feb 13 20:28:47.508303 containerd[1967]: time="2025-02-13T20:28:47.508208356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:28:47.694290 kubelet[2413]: E0213 20:28:47.694238 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:47.869906 systemd[1]: run-containerd-runc-k8s.io-75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985-runc.yBqPTb.mount: Deactivated successfully. Feb 13 20:28:47.870029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75998c376ea33a6e83cdd033db49e37bf9b9056b45e49da6e01911872e496985-rootfs.mount: Deactivated successfully. 
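Note on the repeated driver-call failures above: the kubelet periodically probes its FlexVolume plugin directory (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/), executes each driver binary with the argument "init", and unmarshals the JSON the binary prints on stdout. Because the nodeagent~uds/uds executable is not present, stdout is empty and the unmarshal fails with "unexpected end of JSON input". The following is a minimal, hedged sketch of what such a driver is conventionally expected to print, assuming the standard FlexVolume call convention (first CLI argument is the operation, JSON status on stdout); it is illustrative only and is not the nodeagent~uds driver referenced in this log.

package main

import (
	"encoding/json"
	"os"
)

// driverStatus matches the JSON shape the kubelet expects back from a
// FlexVolume driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	op := ""
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	var out driverStatus
	switch op {
	case "init":
		// The kubelet parses this JSON during plugin probing; a missing
		// binary yields empty output, which is exactly the
		// "unexpected end of JSON input" error seen above.
		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		out = driverStatus{Status: "Not supported", Message: "operation not implemented: " + op}
	}
	json.NewEncoder(os.Stdout).Encode(out)
}

If a binary along these lines were installed at the path named in the log (following the usual vendor~driver directory layout), the probe errors above would be expected to stop; until then they recur on every probe cycle.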
Feb 13 20:28:48.055327 kubelet[2413]: E0213 20:28:48.054741 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:48.132796 containerd[1967]: time="2025-02-13T20:28:48.131482046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:28:48.694701 kubelet[2413]: E0213 20:28:48.694598 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:49.695866 kubelet[2413]: E0213 20:28:49.695807 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:50.056249 kubelet[2413]: E0213 20:28:50.054347 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:50.697136 kubelet[2413]: E0213 20:28:50.697095 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:51.700034 kubelet[2413]: E0213 20:28:51.699992 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:52.054459 kubelet[2413]: E0213 20:28:52.054240 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:52.700588 kubelet[2413]: E0213 20:28:52.700511 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:53.054990 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 20:28:53.702077 kubelet[2413]: E0213 20:28:53.701951 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:53.838952 containerd[1967]: time="2025-02-13T20:28:53.838787404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:53.841188 containerd[1967]: time="2025-02-13T20:28:53.840920885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:28:53.843573 containerd[1967]: time="2025-02-13T20:28:53.843527929Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:53.865204 containerd[1967]: time="2025-02-13T20:28:53.865117054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:53.868964 containerd[1967]: time="2025-02-13T20:28:53.868705329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.737100105s" Feb 13 20:28:53.868964 containerd[1967]: time="2025-02-13T20:28:53.868756516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:28:53.874830 containerd[1967]: time="2025-02-13T20:28:53.874781234Z" level=info msg="CreateContainer within sandbox \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:28:53.899174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977682820.mount: Deactivated successfully. Feb 13 20:28:53.907185 containerd[1967]: time="2025-02-13T20:28:53.907140472Z" level=info msg="CreateContainer within sandbox \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b\"" Feb 13 20:28:53.907811 containerd[1967]: time="2025-02-13T20:28:53.907779072Z" level=info msg="StartContainer for \"08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b\"" Feb 13 20:28:53.960984 systemd[1]: Started cri-containerd-08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b.scope - libcontainer container 08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b. 
Feb 13 20:28:54.019633 containerd[1967]: time="2025-02-13T20:28:54.019580518Z" level=info msg="StartContainer for \"08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b\" returns successfully" Feb 13 20:28:54.053955 kubelet[2413]: E0213 20:28:54.053482 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:54.702725 kubelet[2413]: E0213 20:28:54.702683 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:54.855396 systemd[1]: cri-containerd-08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b.scope: Deactivated successfully. Feb 13 20:28:54.905555 kubelet[2413]: I0213 20:28:54.905516 2413 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 20:28:54.972911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b-rootfs.mount: Deactivated successfully. Feb 13 20:28:55.375633 containerd[1967]: time="2025-02-13T20:28:55.375569208Z" level=info msg="shim disconnected" id=08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b namespace=k8s.io Feb 13 20:28:55.375633 containerd[1967]: time="2025-02-13T20:28:55.375634751Z" level=warning msg="cleaning up after shim disconnected" id=08c3da3fd45751953299f94e104389591d26ee9e26c695f42da0d067636abb3b namespace=k8s.io Feb 13 20:28:55.376398 containerd[1967]: time="2025-02-13T20:28:55.375646498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:28:55.703327 kubelet[2413]: E0213 20:28:55.703191 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:56.065029 systemd[1]: Created slice kubepods-besteffort-podbb98fc29_084d_4742_a951_d1e39bf46fb9.slice - libcontainer container kubepods-besteffort-podbb98fc29_084d_4742_a951_d1e39bf46fb9.slice. 
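For context on the PullImage / CreateContainer / StartContainer / shim-cleanup sequence recorded above: the kubelet drives containerd over CRI, but roughly the same lifecycle can be sketched with containerd's Go client API. The snippet below is an illustrative equivalent under that assumption, not what the kubelet or the Calico pod actually execute; it reuses the k8s.io namespace and the cni image reference that appear in the log.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull the image (the CRI equivalent of the PullImage entries above).
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container from the image. In the log the container is created
	// inside an existing pod sandbox; this standalone container is only an
	// approximation of that step.
	container, err := client.NewContainer(ctx, "install-cni-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("install-cni-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Start the task and wait for it to exit; install-cni runs to completion,
	// after which the scope is deactivated and the shim cleaned up, as above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-statusC
	code, _, _ := status.Result()
	log.Printf("container exited with status %d", code)
}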
Feb 13 20:28:56.071940 containerd[1967]: time="2025-02-13T20:28:56.071873092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvmbv,Uid:bb98fc29-084d-4742-a951-d1e39bf46fb9,Namespace:calico-system,Attempt:0,}" Feb 13 20:28:56.200592 containerd[1967]: time="2025-02-13T20:28:56.199224450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:28:56.277156 containerd[1967]: time="2025-02-13T20:28:56.276795918Z" level=error msg="Failed to destroy network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:56.277540 containerd[1967]: time="2025-02-13T20:28:56.277499133Z" level=error msg="encountered an error cleaning up failed sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:56.277642 containerd[1967]: time="2025-02-13T20:28:56.277572260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvmbv,Uid:bb98fc29-084d-4742-a951-d1e39bf46fb9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:56.283383 kubelet[2413]: E0213 20:28:56.282362 2413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:56.283383 kubelet[2413]: E0213 20:28:56.282467 2413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:56.283383 kubelet[2413]: E0213 20:28:56.282500 2413 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gvmbv" Feb 13 20:28:56.283360 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e-shm.mount: Deactivated successfully. 
Feb 13 20:28:56.291089 kubelet[2413]: E0213 20:28:56.283131 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gvmbv_calico-system(bb98fc29-084d-4742-a951-d1e39bf46fb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gvmbv_calico-system(bb98fc29-084d-4742-a951-d1e39bf46fb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:56.705491 kubelet[2413]: E0213 20:28:56.704235 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:57.197669 kubelet[2413]: I0213 20:28:57.197101 2413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:28:57.198238 containerd[1967]: time="2025-02-13T20:28:57.198196311Z" level=info msg="StopPodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\"" Feb 13 20:28:57.198736 containerd[1967]: time="2025-02-13T20:28:57.198413525Z" level=info msg="Ensure that sandbox f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e in task-service has been cleanup successfully" Feb 13 20:28:57.229570 containerd[1967]: time="2025-02-13T20:28:57.229511070Z" level=error msg="StopPodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" failed" error="failed to destroy network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:57.229871 kubelet[2413]: E0213 20:28:57.229829 2413 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:28:57.229976 kubelet[2413]: E0213 20:28:57.229905 2413 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e"} Feb 13 20:28:57.230031 kubelet[2413]: E0213 20:28:57.229977 2413 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb98fc29-084d-4742-a951-d1e39bf46fb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:28:57.230031 kubelet[2413]: E0213 20:28:57.230011 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"bb98fc29-084d-4742-a951-d1e39bf46fb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvmbv" podUID="bb98fc29-084d-4742-a951-d1e39bf46fb9" Feb 13 20:28:57.705292 kubelet[2413]: E0213 20:28:57.705223 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:58.685452 kubelet[2413]: E0213 20:28:58.683884 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:58.706283 kubelet[2413]: E0213 20:28:58.706242 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:59.245584 systemd[1]: Created slice kubepods-besteffort-pod5697233f_2fb9_4866_b632_351f92190f88.slice - libcontainer container kubepods-besteffort-pod5697233f_2fb9_4866_b632_351f92190f88.slice. Feb 13 20:28:59.252176 kubelet[2413]: I0213 20:28:59.251133 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5t2k\" (UniqueName: \"kubernetes.io/projected/5697233f-2fb9-4866-b632-351f92190f88-kube-api-access-h5t2k\") pod \"nginx-deployment-7fcdb87857-2wrr7\" (UID: \"5697233f-2fb9-4866-b632-351f92190f88\") " pod="default/nginx-deployment-7fcdb87857-2wrr7" Feb 13 20:28:59.555685 containerd[1967]: time="2025-02-13T20:28:59.555571389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2wrr7,Uid:5697233f-2fb9-4866-b632-351f92190f88,Namespace:default,Attempt:0,}" Feb 13 20:28:59.707422 kubelet[2413]: E0213 20:28:59.707333 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:28:59.732295 containerd[1967]: time="2025-02-13T20:28:59.732124483Z" level=error msg="Failed to destroy network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:59.735536 containerd[1967]: time="2025-02-13T20:28:59.735470150Z" level=error msg="encountered an error cleaning up failed sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:59.735773 containerd[1967]: time="2025-02-13T20:28:59.735728345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2wrr7,Uid:5697233f-2fb9-4866-b632-351f92190f88,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:59.737052 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c-shm.mount: Deactivated successfully. Feb 13 20:28:59.737308 kubelet[2413]: E0213 20:28:59.737030 2413 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:28:59.737308 kubelet[2413]: E0213 20:28:59.737124 2413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-2wrr7" Feb 13 20:28:59.737641 kubelet[2413]: E0213 20:28:59.737472 2413 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-2wrr7" Feb 13 20:28:59.737641 kubelet[2413]: E0213 20:28:59.737566 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-2wrr7_default(5697233f-2fb9-4866-b632-351f92190f88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-2wrr7_default(5697233f-2fb9-4866-b632-351f92190f88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-2wrr7" podUID="5697233f-2fb9-4866-b632-351f92190f88" Feb 13 20:29:00.220707 kubelet[2413]: I0213 20:29:00.220670 2413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:00.235743 containerd[1967]: time="2025-02-13T20:29:00.235699137Z" level=info msg="StopPodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\"" Feb 13 20:29:00.236735 containerd[1967]: time="2025-02-13T20:29:00.236697959Z" level=info msg="Ensure that sandbox f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c in task-service has been cleanup successfully" Feb 13 20:29:00.389607 containerd[1967]: time="2025-02-13T20:29:00.389547771Z" level=error msg="StopPodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" failed" error="failed to destroy network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:29:00.389848 kubelet[2413]: 
E0213 20:29:00.389801 2413 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:00.389950 kubelet[2413]: E0213 20:29:00.389868 2413 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c"} Feb 13 20:29:00.389950 kubelet[2413]: E0213 20:29:00.389916 2413 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5697233f-2fb9-4866-b632-351f92190f88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:29:00.390077 kubelet[2413]: E0213 20:29:00.389947 2413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5697233f-2fb9-4866-b632-351f92190f88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-2wrr7" podUID="5697233f-2fb9-4866-b632-351f92190f88" Feb 13 20:29:00.707933 kubelet[2413]: E0213 20:29:00.707881 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:01.708983 kubelet[2413]: E0213 20:29:01.708891 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:02.709236 kubelet[2413]: E0213 20:29:02.709195 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:03.722989 kubelet[2413]: E0213 20:29:03.722941 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:04.724112 kubelet[2413]: E0213 20:29:04.724071 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:05.725169 kubelet[2413]: E0213 20:29:05.725128 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:06.726447 kubelet[2413]: E0213 20:29:06.726388 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:07.094216 update_engine[1943]: I20250213 20:29:07.093472 1943 update_attempter.cc:509] Updating boot flags... 
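The sandbox failures above for both csi-node-driver-gvmbv and nginx-deployment-7fcdb87857-2wrr7 trace back to the same precondition spelled out in the error text: the Calico CNI plugin expects /var/lib/calico/nodename to exist, and that file only appears once the calico/node container is running with /var/lib/calico/ mounted. A hedged, illustrative re-implementation of that readiness check is sketched below (it is not Calico's actual source); once calico-node starts, as the image pull and StartContainer entries that follow show, CNI calls such as the TearDown near the end of this log go through the plugin normally.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readCalicoNodename mirrors the precondition described by the errors above:
// calico/node writes the host's node name to /var/lib/calico/nodename when it
// starts, and the CNI plugin checks and reads that file before doing any pod
// networking. Illustrative sketch only, not Calico's implementation.
func readCalicoNodename() (string, error) {
	const nodenameFile = "/var/lib/calico/nodename"
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico-node is running and has mounted /var/lib/calico/,
		// every RunPodSandbox attempt fails at this point, as seen above.
		return "", fmt.Errorf("calico/node not ready: %w", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readCalicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}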
Feb 13 20:29:07.238681 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3069) Feb 13 20:29:07.722498 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3068) Feb 13 20:29:07.729450 kubelet[2413]: E0213 20:29:07.729350 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:08.729809 kubelet[2413]: E0213 20:29:08.729764 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:09.100035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149291935.mount: Deactivated successfully. Feb 13 20:29:09.178457 containerd[1967]: time="2025-02-13T20:29:09.178389306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:09.180033 containerd[1967]: time="2025-02-13T20:29:09.179971810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:29:09.184457 containerd[1967]: time="2025-02-13T20:29:09.182363067Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:09.193770 containerd[1967]: time="2025-02-13T20:29:09.193720143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:09.194812 containerd[1967]: time="2025-02-13T20:29:09.194773469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 12.994391648s" Feb 13 20:29:09.195076 containerd[1967]: time="2025-02-13T20:29:09.195049600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:29:09.220698 containerd[1967]: time="2025-02-13T20:29:09.220649416Z" level=info msg="CreateContainer within sandbox \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:29:09.279023 containerd[1967]: time="2025-02-13T20:29:09.278871412Z" level=info msg="CreateContainer within sandbox \"017ce16f79bf99e0084c13e088655293387fee5e66d25d8cbfdb9015b16e5e56\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"514ad722e5a8fcde4ed6297c93a912fcdcea4f58c71cb2ae78ca63c4c5df5034\"" Feb 13 20:29:09.283456 containerd[1967]: time="2025-02-13T20:29:09.279824724Z" level=info msg="StartContainer for \"514ad722e5a8fcde4ed6297c93a912fcdcea4f58c71cb2ae78ca63c4c5df5034\"" Feb 13 20:29:09.418639 systemd[1]: Started cri-containerd-514ad722e5a8fcde4ed6297c93a912fcdcea4f58c71cb2ae78ca63c4c5df5034.scope - libcontainer container 514ad722e5a8fcde4ed6297c93a912fcdcea4f58c71cb2ae78ca63c4c5df5034. 
Feb 13 20:29:09.487980 containerd[1967]: time="2025-02-13T20:29:09.487843536Z" level=info msg="StartContainer for \"514ad722e5a8fcde4ed6297c93a912fcdcea4f58c71cb2ae78ca63c4c5df5034\" returns successfully" Feb 13 20:29:09.657746 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:29:09.658534 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 20:29:09.737062 kubelet[2413]: E0213 20:29:09.736489 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:10.353994 kubelet[2413]: I0213 20:29:10.353505 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jzg2v" podStartSLOduration=4.84329914 podStartE2EDuration="31.353457106s" podCreationTimestamp="2025-02-13 20:28:39 +0000 UTC" firstStartedPulling="2025-02-13 20:28:42.685757356 +0000 UTC m=+5.291683577" lastFinishedPulling="2025-02-13 20:29:09.19591532 +0000 UTC m=+31.801841543" observedRunningTime="2025-02-13 20:29:10.347021168 +0000 UTC m=+32.952947412" watchObservedRunningTime="2025-02-13 20:29:10.353457106 +0000 UTC m=+32.959383344" Feb 13 20:29:10.737139 kubelet[2413]: E0213 20:29:10.736651 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:11.054687 containerd[1967]: time="2025-02-13T20:29:11.054518241Z" level=info msg="StopPodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\"" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.218 [INFO][3339] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.218 [INFO][3339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" iface="eth0" netns="/var/run/netns/cni-a5146596-668f-ccc2-6d12-b03f6797480a" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.219 [INFO][3339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" iface="eth0" netns="/var/run/netns/cni-a5146596-668f-ccc2-6d12-b03f6797480a" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.224 [INFO][3339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" iface="eth0" netns="/var/run/netns/cni-a5146596-668f-ccc2-6d12-b03f6797480a" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.224 [INFO][3339] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.224 [INFO][3339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.357 [INFO][3346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.357 [INFO][3346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.357 [INFO][3346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.408 [WARNING][3346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.408 [INFO][3346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.412 [INFO][3346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:29:11.439895 containerd[1967]: 2025-02-13 20:29:11.422 [INFO][3339] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:11.441004 containerd[1967]: time="2025-02-13T20:29:11.440535648Z" level=info msg="TearDown network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" successfully" Feb 13 20:29:11.441004 containerd[1967]: time="2025-02-13T20:29:11.440589134Z" level=info msg="StopPodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" returns successfully" Feb 13 20:29:11.443326 systemd[1]: run-netns-cni\x2da5146596\x2d668f\x2dccc2\x2d6d12\x2db03f6797480a.mount: Deactivated successfully. Feb 13 20:29:11.450727 containerd[1967]: time="2025-02-13T20:29:11.448989954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvmbv,Uid:bb98fc29-084d-4742-a951-d1e39bf46fb9,Namespace:calico-system,Attempt:1,}" Feb 13 20:29:11.737684 kubelet[2413]: E0213 20:29:11.737493 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:11.900403 (udev-worker)[3078]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 20:29:11.905977 systemd-networkd[1807]: cali735add3397c: Link UP Feb 13 20:29:11.907708 systemd-networkd[1807]: cali735add3397c: Gained carrier Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.597 [INFO][3410] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.645 [INFO][3410] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.255-k8s-csi--node--driver--gvmbv-eth0 csi-node-driver- calico-system bb98fc29-084d-4742-a951-d1e39bf46fb9 1096 0 2025-02-13 20:28:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.17.255 csi-node-driver-gvmbv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali735add3397c [] []}} ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.645 [INFO][3410] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.780 [INFO][3466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" HandleID="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.803 [INFO][3466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" HandleID="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290df0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.17.255", "pod":"csi-node-driver-gvmbv", "timestamp":"2025-02-13 20:29:11.780771901 +0000 UTC"}, Hostname:"172.31.17.255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.803 [INFO][3466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.803 [INFO][3466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.803 [INFO][3466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.255' Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.809 [INFO][3466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.822 [INFO][3466] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.833 [INFO][3466] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.838 [INFO][3466] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.841 [INFO][3466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.842 [INFO][3466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.844 [INFO][3466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.859 [INFO][3466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.868 [INFO][3466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.193/26] block=192.168.126.192/26 handle="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.868 [INFO][3466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.193/26] handle="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" host="172.31.17.255" Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.868 [INFO][3466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
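The IPAM entries above confirm block affinity for 192.168.126.192/26 on host 172.31.17.255 and then claim 192.168.126.193 from it; a /26 spans the 64 addresses 192.168.126.192 through 192.168.126.255, and the sandboxes created later in this log receive .194 and .195 from the same block. A quick containment check on the logged values:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses copied from the Calico IPAM entries in this log.
	block := netip.MustParsePrefix("192.168.126.192/26")
	for _, s := range []string{"192.168.126.193", "192.168.126.194", "192.168.126.195"} {
		fmt.Printf("%s in %s: %t\n", s, block, block.Contains(netip.MustParseAddr(s)))
	}
}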
Feb 13 20:29:11.963234 containerd[1967]: 2025-02-13 20:29:11.869 [INFO][3466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.193/26] IPv6=[] ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" HandleID="k8s-pod-network.02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.964324 containerd[1967]: 2025-02-13 20:29:11.874 [INFO][3410] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-csi--node--driver--gvmbv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb98fc29-084d-4742-a951-d1e39bf46fb9", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"", Pod:"csi-node-driver-gvmbv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali735add3397c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:11.964324 containerd[1967]: 2025-02-13 20:29:11.874 [INFO][3410] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.193/32] ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.964324 containerd[1967]: 2025-02-13 20:29:11.874 [INFO][3410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali735add3397c ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.964324 containerd[1967]: 2025-02-13 20:29:11.905 [INFO][3410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:11.964324 containerd[1967]: 2025-02-13 20:29:11.907 [INFO][3410] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" 
WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-csi--node--driver--gvmbv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb98fc29-084d-4742-a951-d1e39bf46fb9", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa", Pod:"csi-node-driver-gvmbv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali735add3397c", MAC:"b6:04:6f:5e:82:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:11.964324 containerd[1967]: 2025-02-13 20:29:11.957 [INFO][3410] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa" Namespace="calico-system" Pod="csi-node-driver-gvmbv" WorkloadEndpoint="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:12.026360 containerd[1967]: time="2025-02-13T20:29:12.024944186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:29:12.026360 containerd[1967]: time="2025-02-13T20:29:12.025015819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:29:12.026360 containerd[1967]: time="2025-02-13T20:29:12.025034916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:29:12.026360 containerd[1967]: time="2025-02-13T20:29:12.025171631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:29:12.074814 systemd[1]: Started cri-containerd-02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa.scope - libcontainer container 02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa. 
Feb 13 20:29:12.154692 containerd[1967]: time="2025-02-13T20:29:12.154406932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvmbv,Uid:bb98fc29-084d-4742-a951-d1e39bf46fb9,Namespace:calico-system,Attempt:1,} returns sandbox id \"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa\"" Feb 13 20:29:12.158253 containerd[1967]: time="2025-02-13T20:29:12.157789973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:29:12.280553 kernel: bpftool[3563]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:29:12.639537 systemd-networkd[1807]: vxlan.calico: Link UP Feb 13 20:29:12.639550 systemd-networkd[1807]: vxlan.calico: Gained carrier Feb 13 20:29:12.675054 (udev-worker)[3060]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:29:12.740734 kubelet[2413]: E0213 20:29:12.740687 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:13.615107 systemd-networkd[1807]: cali735add3397c: Gained IPv6LL Feb 13 20:29:13.741185 kubelet[2413]: E0213 20:29:13.740966 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:13.918939 containerd[1967]: time="2025-02-13T20:29:13.918699735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:13.920425 containerd[1967]: time="2025-02-13T20:29:13.920168507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:29:13.923486 containerd[1967]: time="2025-02-13T20:29:13.922782725Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:13.927186 containerd[1967]: time="2025-02-13T20:29:13.927119561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:13.930138 containerd[1967]: time="2025-02-13T20:29:13.929893357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.772057196s" Feb 13 20:29:13.930138 containerd[1967]: time="2025-02-13T20:29:13.929946853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:29:13.933729 containerd[1967]: time="2025-02-13T20:29:13.933689651Z" level=info msg="CreateContainer within sandbox \"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:29:13.971986 containerd[1967]: time="2025-02-13T20:29:13.971932603Z" level=info msg="CreateContainer within sandbox \"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f6414eec759f46322254bc701c45e0aa8d542f6233b9861edf67c86643ba363\"" Feb 13 20:29:13.974607 containerd[1967]: 
time="2025-02-13T20:29:13.973122269Z" level=info msg="StartContainer for \"3f6414eec759f46322254bc701c45e0aa8d542f6233b9861edf67c86643ba363\"" Feb 13 20:29:14.028697 systemd[1]: Started cri-containerd-3f6414eec759f46322254bc701c45e0aa8d542f6233b9861edf67c86643ba363.scope - libcontainer container 3f6414eec759f46322254bc701c45e0aa8d542f6233b9861edf67c86643ba363. Feb 13 20:29:14.073905 containerd[1967]: time="2025-02-13T20:29:14.073863599Z" level=info msg="StartContainer for \"3f6414eec759f46322254bc701c45e0aa8d542f6233b9861edf67c86643ba363\" returns successfully" Feb 13 20:29:14.075586 containerd[1967]: time="2025-02-13T20:29:14.075545536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:29:14.508666 systemd-networkd[1807]: vxlan.calico: Gained IPv6LL Feb 13 20:29:14.742060 kubelet[2413]: E0213 20:29:14.741999 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:15.058241 containerd[1967]: time="2025-02-13T20:29:15.058178856Z" level=info msg="StopPodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\"" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.187 [INFO][3684] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.188 [INFO][3684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" iface="eth0" netns="/var/run/netns/cni-3fc1bab9-3bae-abc0-0aae-7364ad66c9ab" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.188 [INFO][3684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" iface="eth0" netns="/var/run/netns/cni-3fc1bab9-3bae-abc0-0aae-7364ad66c9ab" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.188 [INFO][3684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" iface="eth0" netns="/var/run/netns/cni-3fc1bab9-3bae-abc0-0aae-7364ad66c9ab" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.188 [INFO][3684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.188 [INFO][3684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.256 [INFO][3690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.257 [INFO][3690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.257 [INFO][3690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.267 [WARNING][3690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.267 [INFO][3690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.269 [INFO][3690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:29:15.273330 containerd[1967]: 2025-02-13 20:29:15.270 [INFO][3684] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:15.281374 systemd[1]: run-netns-cni\x2d3fc1bab9\x2d3bae\x2dabc0\x2d0aae\x2d7364ad66c9ab.mount: Deactivated successfully. Feb 13 20:29:15.286862 containerd[1967]: time="2025-02-13T20:29:15.283050507Z" level=info msg="TearDown network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" successfully" Feb 13 20:29:15.286862 containerd[1967]: time="2025-02-13T20:29:15.283091537Z" level=info msg="StopPodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" returns successfully" Feb 13 20:29:15.287350 containerd[1967]: time="2025-02-13T20:29:15.287305109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2wrr7,Uid:5697233f-2fb9-4866-b632-351f92190f88,Namespace:default,Attempt:1,}" Feb 13 20:29:15.668470 systemd-networkd[1807]: cali0c312e4706b: Link UP Feb 13 20:29:15.672659 systemd-networkd[1807]: cali0c312e4706b: Gained carrier Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.423 [INFO][3696] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0 nginx-deployment-7fcdb87857- default 5697233f-2fb9-4866-b632-351f92190f88 1114 0 2025-02-13 20:28:59 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.255 nginx-deployment-7fcdb87857-2wrr7 eth0 default [] [] [kns.default ksa.default.default] cali0c312e4706b [] []}} ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.423 [INFO][3696] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.493 [INFO][3707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" HandleID="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.705486 
containerd[1967]: 2025-02-13 20:29:15.521 [INFO][3707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" HandleID="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.255", "pod":"nginx-deployment-7fcdb87857-2wrr7", "timestamp":"2025-02-13 20:29:15.493761922 +0000 UTC"}, Hostname:"172.31.17.255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.522 [INFO][3707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.522 [INFO][3707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.522 [INFO][3707] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.255' Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.528 [INFO][3707] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.537 [INFO][3707] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.562 [INFO][3707] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.568 [INFO][3707] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.578 [INFO][3707] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.578 [INFO][3707] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.588 [INFO][3707] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.610 [INFO][3707] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.623 [INFO][3707] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.194/26] block=192.168.126.192/26 handle="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.623 [INFO][3707] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.194/26] handle="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" host="172.31.17.255" Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.625 [INFO][3707] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Feb 13 20:29:15.705486 containerd[1967]: 2025-02-13 20:29:15.625 [INFO][3707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.194/26] IPv6=[] ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" HandleID="k8s-pod-network.052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.708358 containerd[1967]: 2025-02-13 20:29:15.631 [INFO][3696] cni-plugin/k8s.go 386: Populated endpoint ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5697233f-2fb9-4866-b632-351f92190f88", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-2wrr7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0c312e4706b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:15.708358 containerd[1967]: 2025-02-13 20:29:15.631 [INFO][3696] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.194/32] ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.708358 containerd[1967]: 2025-02-13 20:29:15.632 [INFO][3696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c312e4706b ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.708358 containerd[1967]: 2025-02-13 20:29:15.673 [INFO][3696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.708358 containerd[1967]: 2025-02-13 20:29:15.674 [INFO][3696] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" 
WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5697233f-2fb9-4866-b632-351f92190f88", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff", Pod:"nginx-deployment-7fcdb87857-2wrr7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0c312e4706b", MAC:"6a:32:b3:df:d7:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:15.708358 containerd[1967]: 2025-02-13 20:29:15.698 [INFO][3696] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff" Namespace="default" Pod="nginx-deployment-7fcdb87857-2wrr7" WorkloadEndpoint="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:15.742601 kubelet[2413]: E0213 20:29:15.742561 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:15.815786 containerd[1967]: time="2025-02-13T20:29:15.815690329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:29:15.815975 containerd[1967]: time="2025-02-13T20:29:15.815757407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:29:15.815975 containerd[1967]: time="2025-02-13T20:29:15.815779925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:29:15.815975 containerd[1967]: time="2025-02-13T20:29:15.815885738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:29:15.877304 systemd[1]: Started cri-containerd-052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff.scope - libcontainer container 052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff. 
Feb 13 20:29:15.994817 containerd[1967]: time="2025-02-13T20:29:15.992405695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2wrr7,Uid:5697233f-2fb9-4866-b632-351f92190f88,Namespace:default,Attempt:1,} returns sandbox id \"052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff\"" Feb 13 20:29:16.036173 containerd[1967]: time="2025-02-13T20:29:16.036114466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:16.039056 containerd[1967]: time="2025-02-13T20:29:16.038375356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:29:16.040469 containerd[1967]: time="2025-02-13T20:29:16.040396149Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:16.044339 containerd[1967]: time="2025-02-13T20:29:16.043897971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:16.047749 containerd[1967]: time="2025-02-13T20:29:16.046592875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.971005782s" Feb 13 20:29:16.047749 containerd[1967]: time="2025-02-13T20:29:16.046644393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:29:16.053051 containerd[1967]: time="2025-02-13T20:29:16.053015631Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:29:16.055955 containerd[1967]: time="2025-02-13T20:29:16.055177151Z" level=info msg="CreateContainer within sandbox \"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:29:16.078652 containerd[1967]: time="2025-02-13T20:29:16.078601873Z" level=info msg="CreateContainer within sandbox \"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"20c6131dfb9d848d4aa0a749880bad0d6c47ee57bc24045dce12ccfa61b14df6\"" Feb 13 20:29:16.079642 containerd[1967]: time="2025-02-13T20:29:16.079608769Z" level=info msg="StartContainer for \"20c6131dfb9d848d4aa0a749880bad0d6c47ee57bc24045dce12ccfa61b14df6\"" Feb 13 20:29:16.126852 systemd[1]: Started cri-containerd-20c6131dfb9d848d4aa0a749880bad0d6c47ee57bc24045dce12ccfa61b14df6.scope - libcontainer container 20c6131dfb9d848d4aa0a749880bad0d6c47ee57bc24045dce12ccfa61b14df6. 
Feb 13 20:29:16.171550 containerd[1967]: time="2025-02-13T20:29:16.171459930Z" level=info msg="StartContainer for \"20c6131dfb9d848d4aa0a749880bad0d6c47ee57bc24045dce12ccfa61b14df6\" returns successfully" Feb 13 20:29:16.395123 kubelet[2413]: I0213 20:29:16.395047 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gvmbv" podStartSLOduration=33.501945513 podStartE2EDuration="37.395028914s" podCreationTimestamp="2025-02-13 20:28:39 +0000 UTC" firstStartedPulling="2025-02-13 20:29:12.157324688 +0000 UTC m=+34.763250913" lastFinishedPulling="2025-02-13 20:29:16.050408078 +0000 UTC m=+38.656334314" observedRunningTime="2025-02-13 20:29:16.394780023 +0000 UTC m=+39.000706269" watchObservedRunningTime="2025-02-13 20:29:16.395028914 +0000 UTC m=+39.000955158" Feb 13 20:29:16.743946 kubelet[2413]: E0213 20:29:16.743807 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:16.875794 kubelet[2413]: I0213 20:29:16.875709 2413 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:29:16.875794 kubelet[2413]: I0213 20:29:16.875803 2413 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:29:17.068884 systemd-networkd[1807]: cali0c312e4706b: Gained IPv6LL Feb 13 20:29:17.743987 kubelet[2413]: E0213 20:29:17.743916 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:18.683834 kubelet[2413]: E0213 20:29:18.683765 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:18.744609 kubelet[2413]: E0213 20:29:18.744207 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:19.491290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330560067.mount: Deactivated successfully. 
Feb 13 20:29:19.692897 ntpd[1937]: Listen normally on 7 vxlan.calico 192.168.126.192:123 Feb 13 20:29:19.693754 ntpd[1937]: 13 Feb 20:29:19 ntpd[1937]: Listen normally on 7 vxlan.calico 192.168.126.192:123 Feb 13 20:29:19.693754 ntpd[1937]: 13 Feb 20:29:19 ntpd[1937]: Listen normally on 8 cali735add3397c [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 20:29:19.693754 ntpd[1937]: 13 Feb 20:29:19 ntpd[1937]: Listen normally on 9 vxlan.calico [fe80::64b7:a3ff:fecd:ec4%4]:123 Feb 13 20:29:19.693754 ntpd[1937]: 13 Feb 20:29:19 ntpd[1937]: Listen normally on 10 cali0c312e4706b [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 20:29:19.692986 ntpd[1937]: Listen normally on 8 cali735add3397c [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 20:29:19.693041 ntpd[1937]: Listen normally on 9 vxlan.calico [fe80::64b7:a3ff:fecd:ec4%4]:123 Feb 13 20:29:19.693100 ntpd[1937]: Listen normally on 10 cali0c312e4706b [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 20:29:19.744955 kubelet[2413]: E0213 20:29:19.744715 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:20.744877 kubelet[2413]: E0213 20:29:20.744836 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:21.536131 containerd[1967]: time="2025-02-13T20:29:21.536062716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:21.538060 containerd[1967]: time="2025-02-13T20:29:21.537867528Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 20:29:21.538060 containerd[1967]: time="2025-02-13T20:29:21.538011907Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:21.571940 containerd[1967]: time="2025-02-13T20:29:21.571385845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:21.572851 containerd[1967]: time="2025-02-13T20:29:21.572808795Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.519749472s" Feb 13 20:29:21.573007 containerd[1967]: time="2025-02-13T20:29:21.572985665Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:29:21.645830 containerd[1967]: time="2025-02-13T20:29:21.645792736Z" level=info msg="CreateContainer within sandbox \"052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 20:29:21.694689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344132897.mount: Deactivated successfully. 
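In the ntpd entries above, both cali735add3397c and cali0c312e4706b listen on the same link-local address fe80::ecee:eeff:feee:eeee. That is consistent with Calico's convention of assigning every host-side veth the fixed MAC ee:ee:ee:ee:ee:ee (an assumption here, not shown directly in this log), since the modified EUI-64 rule maps that MAC to exactly this address:

package main

import "fmt"

// eui64LinkLocal derives the IPv6 link-local address an interface gets from
// its 48-bit MAC: split the MAC in half, insert ff:fe, and flip the
// universal/local bit of the first octet.
func eui64LinkLocal(mac [6]byte) string {
	iid := []byte{mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]}
	return fmt.Sprintf("fe80::%02x%02x:%02x%02x:%02x%02x:%02x%02x",
		iid[0], iid[1], iid[2], iid[3], iid[4], iid[5], iid[6], iid[7])
}

func main() {
	// ee:ee:ee:ee:ee:ee is the assumed host-side veth MAC, not a logged value.
	fmt.Println(eui64LinkLocal([6]byte{0xee, 0xee, 0xee, 0xee, 0xee, 0xee}))
	// Output: fe80::ecee:eeff:feee:eeee
}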
Feb 13 20:29:21.697674 containerd[1967]: time="2025-02-13T20:29:21.697632642Z" level=info msg="CreateContainer within sandbox \"052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fbe13d5d39875b914fed56765f21d09ea28782d6c782723a05411f27e93ba474\"" Feb 13 20:29:21.698526 containerd[1967]: time="2025-02-13T20:29:21.698246379Z" level=info msg="StartContainer for \"fbe13d5d39875b914fed56765f21d09ea28782d6c782723a05411f27e93ba474\"" Feb 13 20:29:21.740698 systemd[1]: Started cri-containerd-fbe13d5d39875b914fed56765f21d09ea28782d6c782723a05411f27e93ba474.scope - libcontainer container fbe13d5d39875b914fed56765f21d09ea28782d6c782723a05411f27e93ba474. Feb 13 20:29:21.745720 kubelet[2413]: E0213 20:29:21.745675 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:21.772047 containerd[1967]: time="2025-02-13T20:29:21.771999942Z" level=info msg="StartContainer for \"fbe13d5d39875b914fed56765f21d09ea28782d6c782723a05411f27e93ba474\" returns successfully" Feb 13 20:29:22.410771 kubelet[2413]: I0213 20:29:22.410703 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2wrr7" podStartSLOduration=17.835385212 podStartE2EDuration="23.410683209s" podCreationTimestamp="2025-02-13 20:28:59 +0000 UTC" firstStartedPulling="2025-02-13 20:29:15.99946816 +0000 UTC m=+38.605394392" lastFinishedPulling="2025-02-13 20:29:21.574766163 +0000 UTC m=+44.180692389" observedRunningTime="2025-02-13 20:29:22.409403235 +0000 UTC m=+45.015329480" watchObservedRunningTime="2025-02-13 20:29:22.410683209 +0000 UTC m=+45.016609452" Feb 13 20:29:22.746449 kubelet[2413]: E0213 20:29:22.746297 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:23.749136 kubelet[2413]: E0213 20:29:23.749072 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:24.750285 kubelet[2413]: E0213 20:29:24.750241 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:25.750992 kubelet[2413]: E0213 20:29:25.750936 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:26.751441 kubelet[2413]: E0213 20:29:26.751380 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:27.751945 kubelet[2413]: E0213 20:29:27.751898 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:28.752567 kubelet[2413]: E0213 20:29:28.752520 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:29.753316 kubelet[2413]: E0213 20:29:29.753257 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:30.474403 systemd[1]: Created slice kubepods-besteffort-pod0f427652_715e_4bbc_8626_4908ddb09e89.slice - libcontainer container kubepods-besteffort-pod0f427652_715e_4bbc_8626_4908ddb09e89.slice. 
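In the nfs-server-provisioner endpoint dumped below, the service ports appear twice: once as a plain list (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662) and again as hexadecimal Port fields inside the WorkloadEndpointPort structs. A quick conversion shows the two encodings agree:

package main

import "fmt"

func main() {
	// Hex values copied from the WorkloadEndpointPort structs below; the
	// decimal column should match the human-readable port list in the same
	// endpoint.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801}, {"nlockmgr", 0x8023}, {"mountd", 0x4e50},
		{"rquotad", 0x36b}, {"rpcbind", 0x6f}, {"statd", 0x296},
	}
	for _, p := range ports {
		fmt.Printf("%-9s 0x%04x = %d\n", p.name, p.hex, p.hex)
	}
}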
Feb 13 20:29:30.664107 kubelet[2413]: I0213 20:29:30.664059 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0f427652-715e-4bbc-8626-4908ddb09e89-data\") pod \"nfs-server-provisioner-0\" (UID: \"0f427652-715e-4bbc-8626-4908ddb09e89\") " pod="default/nfs-server-provisioner-0" Feb 13 20:29:30.664274 kubelet[2413]: I0213 20:29:30.664139 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n68z\" (UniqueName: \"kubernetes.io/projected/0f427652-715e-4bbc-8626-4908ddb09e89-kube-api-access-9n68z\") pod \"nfs-server-provisioner-0\" (UID: \"0f427652-715e-4bbc-8626-4908ddb09e89\") " pod="default/nfs-server-provisioner-0" Feb 13 20:29:30.753535 kubelet[2413]: E0213 20:29:30.753397 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:30.785898 containerd[1967]: time="2025-02-13T20:29:30.784867169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0f427652-715e-4bbc-8626-4908ddb09e89,Namespace:default,Attempt:0,}" Feb 13 20:29:31.079142 (udev-worker)[3914]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:29:31.080370 systemd-networkd[1807]: cali60e51b789ff: Link UP Feb 13 20:29:31.082937 systemd-networkd[1807]: cali60e51b789ff: Gained carrier Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:30.969 [INFO][3919] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.255-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 0f427652-715e-4bbc-8626-4908ddb09e89 1179 0 2025-02-13 20:29:30 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.17.255 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:30.969 [INFO][3919] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.007 [INFO][3930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" HandleID="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Workload="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 
20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.023 [INFO][3930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" HandleID="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Workload="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba630), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.255", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 20:29:31.007820442 +0000 UTC"}, Hostname:"172.31.17.255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.023 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.023 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.023 [INFO][3930] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.255' Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.026 [INFO][3930] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.032 [INFO][3930] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.039 [INFO][3930] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.041 [INFO][3930] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.045 [INFO][3930] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.045 [INFO][3930] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.048 [INFO][3930] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6 Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.058 [INFO][3930] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.074 [INFO][3930] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.195/26] block=192.168.126.192/26 handle="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.074 [INFO][3930] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.195/26] handle="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" host="172.31.17.255" Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.074 [INFO][3930] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Feb 13 20:29:31.102796 containerd[1967]: 2025-02-13 20:29:31.074 [INFO][3930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.195/26] IPv6=[] ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" HandleID="k8s-pod-network.8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Workload="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:29:31.103981 containerd[1967]: 2025-02-13 20:29:31.076 [INFO][3919] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0f427652-715e-4bbc-8626-4908ddb09e89", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:31.103981 containerd[1967]: 2025-02-13 20:29:31.076 [INFO][3919] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.195/32] ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:29:31.103981 containerd[1967]: 2025-02-13 20:29:31.076 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:29:31.103981 containerd[1967]: 2025-02-13 20:29:31.083 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:29:31.104279 containerd[1967]: 2025-02-13 20:29:31.084 [INFO][3919] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0f427652-715e-4bbc-8626-4908ddb09e89", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.126.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"32:e1:5e:08:7c:24", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:31.104279 containerd[1967]: 2025-02-13 20:29:31.100 [INFO][3919] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.255-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:29:31.136278 containerd[1967]: time="2025-02-13T20:29:31.136157116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:29:31.136694 containerd[1967]: time="2025-02-13T20:29:31.136232484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:29:31.138416 containerd[1967]: time="2025-02-13T20:29:31.136659657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:29:31.138416 containerd[1967]: time="2025-02-13T20:29:31.138290883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:29:31.174678 systemd[1]: Started cri-containerd-8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6.scope - libcontainer container 8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6. 
Feb 13 20:29:31.273144 containerd[1967]: time="2025-02-13T20:29:31.273054809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0f427652-715e-4bbc-8626-4908ddb09e89,Namespace:default,Attempt:0,} returns sandbox id \"8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6\"" Feb 13 20:29:31.275354 containerd[1967]: time="2025-02-13T20:29:31.275118197Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 20:29:31.754513 kubelet[2413]: E0213 20:29:31.754449 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:32.755157 kubelet[2413]: E0213 20:29:32.755054 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:33.005465 systemd-networkd[1807]: cali60e51b789ff: Gained IPv6LL Feb 13 20:29:33.755643 kubelet[2413]: E0213 20:29:33.755588 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:34.756369 kubelet[2413]: E0213 20:29:34.756309 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:35.617032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648997442.mount: Deactivated successfully. Feb 13 20:29:35.693134 ntpd[1937]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 20:29:35.693570 ntpd[1937]: 13 Feb 20:29:35 ntpd[1937]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 20:29:35.757230 kubelet[2413]: E0213 20:29:35.757186 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:36.758329 kubelet[2413]: E0213 20:29:36.758287 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:37.759955 kubelet[2413]: E0213 20:29:37.759856 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:38.712310 kubelet[2413]: E0213 20:29:38.711945 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:38.789759 kubelet[2413]: E0213 20:29:38.789099 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:39.179334 containerd[1967]: time="2025-02-13T20:29:39.179277201Z" level=info msg="StopPodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\"" Feb 13 20:29:39.407368 containerd[1967]: time="2025-02-13T20:29:39.407242950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:39.409469 containerd[1967]: time="2025-02-13T20:29:39.409226398Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 20:29:39.412253 containerd[1967]: time="2025-02-13T20:29:39.412198165Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:39.416938 containerd[1967]: time="2025-02-13T20:29:39.416866818Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:29:39.420862 containerd[1967]: time="2025-02-13T20:29:39.420808705Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.14512901s" Feb 13 20:29:39.421677 containerd[1967]: time="2025-02-13T20:29:39.421270020Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 20:29:39.449280 containerd[1967]: time="2025-02-13T20:29:39.448184814Z" level=info msg="CreateContainer within sandbox \"8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 20:29:39.489114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664760092.mount: Deactivated successfully. Feb 13 20:29:39.499778 containerd[1967]: time="2025-02-13T20:29:39.498983363Z" level=info msg="CreateContainer within sandbox \"8fbd3820bdb840897ed55fba74c353324df212eab686252414dc6c014ada9bc6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7c92e600206c16c44e575091977a9cd2eb2db3ced555a6ff389929a39144961b\"" Feb 13 20:29:39.503495 containerd[1967]: time="2025-02-13T20:29:39.503360279Z" level=info msg="StartContainer for \"7c92e600206c16c44e575091977a9cd2eb2db3ced555a6ff389929a39144961b\"" Feb 13 20:29:39.604058 systemd[1]: run-containerd-runc-k8s.io-7c92e600206c16c44e575091977a9cd2eb2db3ced555a6ff389929a39144961b-runc.akdDeB.mount: Deactivated successfully. Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.415 [WARNING][4049] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-csi--node--driver--gvmbv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb98fc29-084d-4742-a951-d1e39bf46fb9", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa", Pod:"csi-node-driver-gvmbv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali735add3397c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.415 [INFO][4049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.415 [INFO][4049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" iface="eth0" netns="" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.415 [INFO][4049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.415 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.509 [INFO][4055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.512 [INFO][4055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.512 [INFO][4055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.588 [WARNING][4055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.588 [INFO][4055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.591 [INFO][4055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:29:39.610040 containerd[1967]: 2025-02-13 20:29:39.606 [INFO][4049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.610966 containerd[1967]: time="2025-02-13T20:29:39.610085519Z" level=info msg="TearDown network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" successfully" Feb 13 20:29:39.610966 containerd[1967]: time="2025-02-13T20:29:39.610117106Z" level=info msg="StopPodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" returns successfully" Feb 13 20:29:39.620699 systemd[1]: Started cri-containerd-7c92e600206c16c44e575091977a9cd2eb2db3ced555a6ff389929a39144961b.scope - libcontainer container 7c92e600206c16c44e575091977a9cd2eb2db3ced555a6ff389929a39144961b. Feb 13 20:29:39.666876 containerd[1967]: time="2025-02-13T20:29:39.665660048Z" level=info msg="RemovePodSandbox for \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\"" Feb 13 20:29:39.666876 containerd[1967]: time="2025-02-13T20:29:39.665716644Z" level=info msg="Forcibly stopping sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\"" Feb 13 20:29:39.729131 containerd[1967]: time="2025-02-13T20:29:39.729014375Z" level=info msg="StartContainer for \"7c92e600206c16c44e575091977a9cd2eb2db3ced555a6ff389929a39144961b\" returns successfully" Feb 13 20:29:39.790604 kubelet[2413]: E0213 20:29:39.790555 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.814 [WARNING][4098] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-csi--node--driver--gvmbv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb98fc29-084d-4742-a951-d1e39bf46fb9", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"02f23f7ab95f42dd8d9d5ac6435247c03ef7a86bd2dcd6c901c8790c99d685fa", Pod:"csi-node-driver-gvmbv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali735add3397c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.815 [INFO][4098] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.817 [INFO][4098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" iface="eth0" netns="" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.817 [INFO][4098] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.817 [INFO][4098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.902 [INFO][4118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.904 [INFO][4118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.904 [INFO][4118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.917 [WARNING][4118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.917 [INFO][4118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" HandleID="k8s-pod-network.f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Workload="172.31.17.255-k8s-csi--node--driver--gvmbv-eth0" Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.920 [INFO][4118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:29:39.933712 containerd[1967]: 2025-02-13 20:29:39.925 [INFO][4098] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e" Feb 13 20:29:39.937765 containerd[1967]: time="2025-02-13T20:29:39.933789263Z" level=info msg="TearDown network for sandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" successfully" Feb 13 20:29:40.029215 containerd[1967]: time="2025-02-13T20:29:40.029069226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:29:40.029215 containerd[1967]: time="2025-02-13T20:29:40.029181817Z" level=info msg="RemovePodSandbox \"f1d8f3ed621e3bda4a3f1739d9d23ee4219f1575f06e1f9b3c44f208b74cb25e\" returns successfully" Feb 13 20:29:40.035509 containerd[1967]: time="2025-02-13T20:29:40.035464976Z" level=info msg="StopPodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\"" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.185 [WARNING][4141] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5697233f-2fb9-4866-b632-351f92190f88", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff", Pod:"nginx-deployment-7fcdb87857-2wrr7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0c312e4706b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.186 [INFO][4141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.186 [INFO][4141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" iface="eth0" netns="" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.186 [INFO][4141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.186 [INFO][4141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.262 [INFO][4153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.264 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.264 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.275 [WARNING][4153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.275 [INFO][4153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.283 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:29:40.288324 containerd[1967]: 2025-02-13 20:29:40.285 [INFO][4141] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.288324 containerd[1967]: time="2025-02-13T20:29:40.287884905Z" level=info msg="TearDown network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" successfully" Feb 13 20:29:40.288324 containerd[1967]: time="2025-02-13T20:29:40.287979328Z" level=info msg="StopPodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" returns successfully" Feb 13 20:29:40.290471 containerd[1967]: time="2025-02-13T20:29:40.290003515Z" level=info msg="RemovePodSandbox for \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\"" Feb 13 20:29:40.290471 containerd[1967]: time="2025-02-13T20:29:40.290295595Z" level=info msg="Forcibly stopping sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\"" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.368 [WARNING][4171] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"5697233f-2fb9-4866-b632-351f92190f88", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"052be35cc4f4bf3f6718f9c5f04b8d34d480cc42b84b24d3fb9e5ae9124cccff", Pod:"nginx-deployment-7fcdb87857-2wrr7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali0c312e4706b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.373 [INFO][4171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.373 [INFO][4171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" iface="eth0" netns="" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.373 [INFO][4171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.373 [INFO][4171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.415 [INFO][4177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.416 [INFO][4177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.416 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.428 [WARNING][4177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.428 [INFO][4177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" HandleID="k8s-pod-network.f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Workload="172.31.17.255-k8s-nginx--deployment--7fcdb87857--2wrr7-eth0" Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.432 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:29:40.436768 containerd[1967]: 2025-02-13 20:29:40.433 [INFO][4171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c" Feb 13 20:29:40.438024 containerd[1967]: time="2025-02-13T20:29:40.436953518Z" level=info msg="TearDown network for sandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" successfully" Feb 13 20:29:40.442754 containerd[1967]: time="2025-02-13T20:29:40.442612286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:29:40.443011 containerd[1967]: time="2025-02-13T20:29:40.442762684Z" level=info msg="RemovePodSandbox \"f283c79ebfef4b75cf02f44950aabec44aab0c1b84009bc108ec2e2c0fa0f32c\" returns successfully" Feb 13 20:29:40.634580 kubelet[2413]: I0213 20:29:40.634514 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.486107527 podStartE2EDuration="10.634497822s" podCreationTimestamp="2025-02-13 20:29:30 +0000 UTC" firstStartedPulling="2025-02-13 20:29:31.274849632 +0000 UTC m=+53.880775868" lastFinishedPulling="2025-02-13 20:29:39.423239926 +0000 UTC m=+62.029166163" observedRunningTime="2025-02-13 20:29:40.634119647 +0000 UTC m=+63.240045888" watchObservedRunningTime="2025-02-13 20:29:40.634497822 +0000 UTC m=+63.240424063" Feb 13 20:29:40.791573 kubelet[2413]: E0213 20:29:40.791511 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:41.792866 kubelet[2413]: E0213 20:29:41.792667 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:42.793453 kubelet[2413]: E0213 20:29:42.793387 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:43.794306 kubelet[2413]: E0213 20:29:43.794250 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:44.802520 kubelet[2413]: E0213 20:29:44.802150 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:45.802515 kubelet[2413]: E0213 20:29:45.802457 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:46.803404 kubelet[2413]: E0213 20:29:46.803291 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:47.804472 kubelet[2413]: E0213 20:29:47.804414 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:48.805101 kubelet[2413]: E0213 20:29:48.805045 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:49.805282 kubelet[2413]: E0213 20:29:49.805227 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:50.806574 kubelet[2413]: E0213 20:29:50.806230 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:51.807263 kubelet[2413]: E0213 20:29:51.807210 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:52.808256 kubelet[2413]: E0213 20:29:52.808210 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:53.809274 kubelet[2413]: E0213 20:29:53.809217 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:54.809605 kubelet[2413]: E0213 20:29:54.809521 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:55.810580 kubelet[2413]: E0213 20:29:55.810524 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:56.811472 kubelet[2413]: E0213 20:29:56.811404 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:57.814330 kubelet[2413]: E0213 20:29:57.814271 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:58.683897 kubelet[2413]: E0213 20:29:58.683841 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:58.814993 kubelet[2413]: E0213 20:29:58.814934 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:29:59.816113 kubelet[2413]: E0213 20:29:59.816050 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:00.817102 kubelet[2413]: E0213 20:30:00.817049 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:01.817856 kubelet[2413]: E0213 20:30:01.817795 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:02.818040 kubelet[2413]: E0213 20:30:02.817982 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:03.818472 kubelet[2413]: E0213 20:30:03.818399 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:04.808752 systemd[1]: Created slice kubepods-besteffort-pod17afbac4_cb21_4ba3_b071_18c7d2ea2484.slice - libcontainer container kubepods-besteffort-pod17afbac4_cb21_4ba3_b071_18c7d2ea2484.slice. 
Feb 13 20:30:04.819461 kubelet[2413]: E0213 20:30:04.819381 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:04.844495 kubelet[2413]: I0213 20:30:04.841739 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e043febd-2309-4089-acd9-25fc6ac32961\" (UniqueName: \"kubernetes.io/nfs/17afbac4-cb21-4ba3-b071-18c7d2ea2484-pvc-e043febd-2309-4089-acd9-25fc6ac32961\") pod \"test-pod-1\" (UID: \"17afbac4-cb21-4ba3-b071-18c7d2ea2484\") " pod="default/test-pod-1" Feb 13 20:30:04.844770 kubelet[2413]: I0213 20:30:04.844636 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwhq\" (UniqueName: \"kubernetes.io/projected/17afbac4-cb21-4ba3-b071-18c7d2ea2484-kube-api-access-rrwhq\") pod \"test-pod-1\" (UID: \"17afbac4-cb21-4ba3-b071-18c7d2ea2484\") " pod="default/test-pod-1" Feb 13 20:30:05.132476 kernel: FS-Cache: Loaded Feb 13 20:30:05.492020 kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:30:05.492322 kernel: RPC: Registered udp transport module. Feb 13 20:30:05.492458 kernel: RPC: Registered tcp transport module. Feb 13 20:30:05.493075 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:30:05.498813 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 20:30:05.821111 kubelet[2413]: E0213 20:30:05.820909 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:06.620208 kernel: NFS: Registering the id_resolver key type Feb 13 20:30:06.620366 kernel: Key type id_resolver registered Feb 13 20:30:06.620402 kernel: Key type id_legacy registered Feb 13 20:30:06.704107 nfsidmap[4250]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 20:30:06.713178 nfsidmap[4251]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 20:30:06.821312 kubelet[2413]: E0213 20:30:06.821265 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:06.943301 containerd[1967]: time="2025-02-13T20:30:06.943151308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:17afbac4-cb21-4ba3-b071-18c7d2ea2484,Namespace:default,Attempt:0,}" Feb 13 20:30:07.229274 (udev-worker)[4237]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 20:30:07.233748 systemd-networkd[1807]: cali5ec59c6bf6e: Link UP Feb 13 20:30:07.233971 systemd-networkd[1807]: cali5ec59c6bf6e: Gained carrier Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.061 [INFO][4252] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.255-k8s-test--pod--1-eth0 default 17afbac4-cb21-4ba3-b071-18c7d2ea2484 1288 0 2025-02-13 20:29:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.255 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.061 [INFO][4252] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.102 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" HandleID="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Workload="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.118 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" HandleID="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Workload="172.31.17.255-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.255", "pod":"test-pod-1", "timestamp":"2025-02-13 20:30:07.102490972 +0000 UTC"}, Hostname:"172.31.17.255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.118 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.118 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.118 [INFO][4263] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.255' Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.125 [INFO][4263] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.131 [INFO][4263] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.143 [INFO][4263] ipam/ipam.go 489: Trying affinity for 192.168.126.192/26 host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.159 [INFO][4263] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.179 [INFO][4263] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.192/26 host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.179 [INFO][4263] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.192/26 handle="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.186 [INFO][4263] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4 Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.198 [INFO][4263] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.192/26 handle="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.217 [INFO][4263] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.196/26] block=192.168.126.192/26 handle="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.217 [INFO][4263] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.196/26] handle="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" host="172.31.17.255" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.217 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.218 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.196/26] IPv6=[] ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" HandleID="k8s-pod-network.14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Workload="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.260263 containerd[1967]: 2025-02-13 20:30:07.222 [INFO][4252] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"17afbac4-cb21-4ba3-b071-18c7d2ea2484", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:30:07.266491 containerd[1967]: 2025-02-13 20:30:07.222 [INFO][4252] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.196/32] ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.266491 containerd[1967]: 2025-02-13 20:30:07.222 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.266491 containerd[1967]: 2025-02-13 20:30:07.235 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.266491 containerd[1967]: 2025-02-13 20:30:07.239 [INFO][4252] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.255-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"17afbac4-cb21-4ba3-b071-18c7d2ea2484", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 29, 
31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.255", ContainerID:"14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"9e:fb:7a:f0:6f:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:30:07.266491 containerd[1967]: 2025-02-13 20:30:07.253 [INFO][4252] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.255-k8s-test--pod--1-eth0" Feb 13 20:30:07.403131 containerd[1967]: time="2025-02-13T20:30:07.402701419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:07.403131 containerd[1967]: time="2025-02-13T20:30:07.403046003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:07.403131 containerd[1967]: time="2025-02-13T20:30:07.403066305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:07.403597 containerd[1967]: time="2025-02-13T20:30:07.403193445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:07.458790 systemd[1]: Started cri-containerd-14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4.scope - libcontainer container 14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4. Feb 13 20:30:07.523575 containerd[1967]: time="2025-02-13T20:30:07.521075153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:17afbac4-cb21-4ba3-b071-18c7d2ea2484,Namespace:default,Attempt:0,} returns sandbox id \"14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4\"" Feb 13 20:30:07.523942 containerd[1967]: time="2025-02-13T20:30:07.523905664Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:30:07.720694 systemd[1]: run-containerd-runc-k8s.io-14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4-runc.ZhmqxP.mount: Deactivated successfully. 
Feb 13 20:30:07.822552 kubelet[2413]: E0213 20:30:07.822411 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:08.031165 containerd[1967]: time="2025-02-13T20:30:08.031097949Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:08.033634 containerd[1967]: time="2025-02-13T20:30:08.032977392Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 20:30:08.039683 containerd[1967]: time="2025-02-13T20:30:08.039570586Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 515.613677ms" Feb 13 20:30:08.039683 containerd[1967]: time="2025-02-13T20:30:08.039685550Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:30:08.051461 containerd[1967]: time="2025-02-13T20:30:08.050279106Z" level=info msg="CreateContainer within sandbox \"14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 20:30:08.128744 containerd[1967]: time="2025-02-13T20:30:08.128688101Z" level=info msg="CreateContainer within sandbox \"14bc73c1d008571b34e61ce85feeb3c40442a108972620e1e520b1e7b7da81f4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"96bc2765305162c919dfbf5cb9dfcd8a0f8909e49781a6ee3d54d0f1060b5b64\"" Feb 13 20:30:08.129574 containerd[1967]: time="2025-02-13T20:30:08.129534975Z" level=info msg="StartContainer for \"96bc2765305162c919dfbf5cb9dfcd8a0f8909e49781a6ee3d54d0f1060b5b64\"" Feb 13 20:30:08.292556 systemd[1]: Started cri-containerd-96bc2765305162c919dfbf5cb9dfcd8a0f8909e49781a6ee3d54d0f1060b5b64.scope - libcontainer container 96bc2765305162c919dfbf5cb9dfcd8a0f8909e49781a6ee3d54d0f1060b5b64. Feb 13 20:30:08.355949 containerd[1967]: time="2025-02-13T20:30:08.355900019Z" level=info msg="StartContainer for \"96bc2765305162c919dfbf5cb9dfcd8a0f8909e49781a6ee3d54d0f1060b5b64\" returns successfully" Feb 13 20:30:08.723888 systemd[1]: run-containerd-runc-k8s.io-96bc2765305162c919dfbf5cb9dfcd8a0f8909e49781a6ee3d54d0f1060b5b64-runc.JXPjOB.mount: Deactivated successfully. Feb 13 20:30:08.823006 kubelet[2413]: E0213 20:30:08.822933 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:08.908817 systemd-networkd[1807]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 20:30:09.823242 kubelet[2413]: E0213 20:30:09.823185 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:10.824203 kubelet[2413]: E0213 20:30:10.824159 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:11.372311 systemd[1]: run-containerd-runc-k8s.io-514ad722e5a8fcde4ed6297c93a912fcdcea4f58c71cb2ae78ca63c4c5df5034-runc.eugIB3.mount: Deactivated successfully. 
Feb 13 20:30:11.692502 ntpd[1937]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 20:30:11.693005 ntpd[1937]: 13 Feb 20:30:11 ntpd[1937]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 20:30:11.824851 kubelet[2413]: E0213 20:30:11.824793 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:12.825758 kubelet[2413]: E0213 20:30:12.825703 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:13.826653 kubelet[2413]: E0213 20:30:13.826601 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:14.826936 kubelet[2413]: E0213 20:30:14.826880 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:15.827124 kubelet[2413]: E0213 20:30:15.827073 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:16.827720 kubelet[2413]: E0213 20:30:16.827661 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:17.829815 kubelet[2413]: E0213 20:30:17.827974 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:18.683485 kubelet[2413]: E0213 20:30:18.683415 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:18.829256 kubelet[2413]: E0213 20:30:18.829196 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:19.830092 kubelet[2413]: E0213 20:30:19.830034 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:20.830481 kubelet[2413]: E0213 20:30:20.830414 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:21.830819 kubelet[2413]: E0213 20:30:21.830774 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:22.830940 kubelet[2413]: E0213 20:30:22.830884 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:23.831847 kubelet[2413]: E0213 20:30:23.831788 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:24.832047 kubelet[2413]: E0213 20:30:24.831992 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:25.832578 kubelet[2413]: E0213 20:30:25.832519 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:26.833346 kubelet[2413]: E0213 20:30:26.833291 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:27.834474 kubelet[2413]: E0213 20:30:27.834409 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:28.835314 kubelet[2413]: E0213 20:30:28.835201 
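Editor's aside: the kubelet message that repeats roughly once per second above comes from its static pod file source: the configured manifest directory /etc/kubernetes/manifests does not exist on this node, so each poll logs the message and carries on. A rough sketch of that behavior, assuming a plain os.Stat poll rather than kubelet's actual file_linux.go implementation:

```go
// Rough sketch of a static-pod manifest directory poll, assuming a simple
// os.Stat check; kubelet's real file source is more involved.
package main

import (
	"log"
	"os"
	"time"
)

func main() {
	const path = "/etc/kubernetes/manifests"
	for {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			// Mirrors the repeated log line: a missing path is not fatal.
			log.Printf("Unable to read config path %q: path does not exist, ignoring", path)
		} else if err != nil {
			log.Printf("Unable to read config path %q: %v", path, err)
		}
		// The journal shows the message roughly once per second.
		time.Sleep(time.Second)
	}
}
```

On a node that is not meant to run static pods from disk, creating the directory (even empty) is normally enough to quiet the message.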
2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:29.836405 kubelet[2413]: E0213 20:30:29.836363 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:30.837233 kubelet[2413]: E0213 20:30:30.837173 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:31.356705 kubelet[2413]: E0213 20:30:31.356636 2413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:30:31.837564 kubelet[2413]: E0213 20:30:31.837511 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:32.838520 kubelet[2413]: E0213 20:30:32.838462 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:33.838717 kubelet[2413]: E0213 20:30:33.838658 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:34.839861 kubelet[2413]: E0213 20:30:34.839777 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:35.840330 kubelet[2413]: E0213 20:30:35.840275 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:36.840711 kubelet[2413]: E0213 20:30:36.840653 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:37.841205 kubelet[2413]: E0213 20:30:37.841151 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:38.683523 kubelet[2413]: E0213 20:30:38.683474 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:38.842300 kubelet[2413]: E0213 20:30:38.842242 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:39.842506 kubelet[2413]: E0213 20:30:39.842385 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:40.843184 kubelet[2413]: E0213 20:30:40.843136 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:41.358589 kubelet[2413]: E0213 20:30:41.358387 2413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:30:41.847005 kubelet[2413]: E0213 20:30:41.846947 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:42.847886 kubelet[2413]: E0213 20:30:42.847839 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:43.848228 kubelet[2413]: E0213 20:30:43.848175 2413 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:44.848904 kubelet[2413]: E0213 20:30:44.848838 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:45.849500 kubelet[2413]: E0213 20:30:45.849445 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:46.850585 kubelet[2413]: E0213 20:30:46.850526 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:47.851128 kubelet[2413]: E0213 20:30:47.851085 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:48.851828 kubelet[2413]: E0213 20:30:48.851781 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:49.852509 kubelet[2413]: E0213 20:30:49.852469 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:50.853156 kubelet[2413]: E0213 20:30:50.853108 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:51.362122 kubelet[2413]: E0213 20:30:51.361874 2413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:30:51.853508 kubelet[2413]: E0213 20:30:51.853453 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:52.853954 kubelet[2413]: E0213 20:30:52.853917 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:53.854796 kubelet[2413]: E0213 20:30:53.854741 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:54.855259 kubelet[2413]: E0213 20:30:54.855205 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:55.856252 kubelet[2413]: E0213 20:30:55.856204 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:56.856800 kubelet[2413]: E0213 20:30:56.856747 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:57.583331 kubelet[2413]: E0213 20:30:57.582944 2413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": unexpected EOF" Feb 13 20:30:57.621213 kubelet[2413]: E0213 20:30:57.620121 2413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": read tcp 172.31.17.255:44680->172.31.24.43:6443: read: connection reset by peer" Feb 13 20:30:57.651698 kubelet[2413]: I0213 20:30:57.651643 2413 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 13 20:30:57.653616 
kubelet[2413]: E0213 20:30:57.652896 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": dial tcp 172.31.24.43:6443: connect: connection refused" interval="200ms" Feb 13 20:30:57.854337 kubelet[2413]: E0213 20:30:57.853934 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": dial tcp 172.31.24.43:6443: connect: connection refused" interval="400ms" Feb 13 20:30:57.857827 kubelet[2413]: E0213 20:30:57.857786 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:58.255459 kubelet[2413]: E0213 20:30:58.255328 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": dial tcp 172.31.24.43:6443: connect: connection refused" interval="800ms" Feb 13 20:30:58.683821 kubelet[2413]: E0213 20:30:58.683746 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:58.858933 kubelet[2413]: E0213 20:30:58.858881 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:30:59.859111 kubelet[2413]: E0213 20:30:59.859076 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:00.859964 kubelet[2413]: E0213 20:31:00.859885 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:01.861398 kubelet[2413]: E0213 20:31:01.861336 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:02.862418 kubelet[2413]: E0213 20:31:02.862368 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:03.863185 kubelet[2413]: E0213 20:31:03.863133 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:04.864229 kubelet[2413]: E0213 20:31:04.864095 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:05.864991 kubelet[2413]: E0213 20:31:05.864929 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:06.865703 kubelet[2413]: E0213 20:31:06.865439 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:07.867593 kubelet[2413]: E0213 20:31:07.867219 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:08.868027 kubelet[2413]: E0213 20:31:08.867865 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:09.055776 kubelet[2413]: E0213 20:31:09.055724 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.24.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.255?timeout=10s\": context deadline exceeded" interval="1.6s" Feb 13 20:31:09.869149 kubelet[2413]: E0213 20:31:09.869090 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:10.870134 kubelet[2413]: E0213 20:31:10.870071 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:11.871045 kubelet[2413]: E0213 20:31:11.870983 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:12.871888 kubelet[2413]: E0213 20:31:12.871835 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:13.873008 kubelet[2413]: E0213 20:31:13.872948 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:14.873577 kubelet[2413]: E0213 20:31:14.873516 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:15.874580 kubelet[2413]: E0213 20:31:15.874522 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:16.875214 kubelet[2413]: E0213 20:31:16.875127 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:17.875415 kubelet[2413]: E0213 20:31:17.875361 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:18.683639 kubelet[2413]: E0213 20:31:18.683584 2413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:31:18.876312 kubelet[2413]: E0213 20:31:18.876258 2413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"