Jan 17 12:18:34.063952 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:18:34.063991 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:18:34.064007 kernel: BIOS-provided physical RAM map: Jan 17 12:18:34.064019 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:18:34.064030 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:18:34.064042 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:18:34.064060 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 17 12:18:34.068130 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 17 12:18:34.068152 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 17 12:18:34.068164 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:18:34.068177 kernel: NX (Execute Disable) protection: active Jan 17 12:18:34.068189 kernel: APIC: Static calls initialized Jan 17 12:18:34.068200 kernel: SMBIOS 2.7 present. Jan 17 12:18:34.068213 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 17 12:18:34.068235 kernel: Hypervisor detected: KVM Jan 17 12:18:34.068249 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:18:34.068262 kernel: kvm-clock: using sched offset of 6252688813 cycles Jan 17 12:18:34.068277 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:18:34.068291 kernel: tsc: Detected 2499.998 MHz processor Jan 17 12:18:34.068305 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:18:34.068319 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:18:34.068335 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 17 12:18:34.068349 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:18:34.068363 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:18:34.068376 kernel: Using GB pages for direct mapping Jan 17 12:18:34.068390 kernel: ACPI: Early table checksum verification disabled Jan 17 12:18:34.068403 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 17 12:18:34.068416 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 17 12:18:34.068430 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 17 12:18:34.068443 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 17 12:18:34.068459 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 17 12:18:34.068472 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 17 12:18:34.068484 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 17 12:18:34.068499 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 17 12:18:34.068513 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 17 12:18:34.068528 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 17 12:18:34.068542 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 17 12:18:34.068555 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 17 12:18:34.068569 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 17 12:18:34.068588 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 17 12:18:34.068608 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 17 12:18:34.068622 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 17 12:18:34.068637 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 17 12:18:34.068651 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 17 12:18:34.068666 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 17 12:18:34.068679 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 17 12:18:34.068694 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 17 12:18:34.068707 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 17 12:18:34.068720 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:18:34.068733 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:18:34.068746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 17 12:18:34.068759 kernel: NUMA: Initialized distance table, cnt=1 Jan 17 12:18:34.068772 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 17 12:18:34.068788 kernel: Zone ranges: Jan 17 12:18:34.068801 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:18:34.068815 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 17 12:18:34.068828 kernel: Normal empty Jan 17 12:18:34.068842 kernel: Movable zone start for each node Jan 17 12:18:34.068856 kernel: Early memory node ranges Jan 17 12:18:34.068870 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:18:34.068883 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 17 12:18:34.068897 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 17 12:18:34.068914 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:18:34.068929 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:18:34.068943 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 17 12:18:34.068956 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 12:18:34.068971 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:18:34.068986 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 17 12:18:34.069001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:18:34.069016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:18:34.069030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:18:34.069045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:18:34.069063 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:18:34.072145 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:18:34.072162 kernel: TSC deadline timer available Jan 17 12:18:34.072177 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:18:34.072191 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:18:34.072205 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 17 12:18:34.072219 kernel: Booting paravirtualized kernel on KVM Jan 17 12:18:34.072233 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:18:34.072248 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:18:34.072269 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:18:34.072283 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:18:34.072296 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:18:34.072310 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:18:34.072324 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:18:34.072340 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:18:34.072355 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:18:34.072369 kernel: random: crng init done Jan 17 12:18:34.072385 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:18:34.072400 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:18:34.072414 kernel: Fallback order for Node 0: 0 Jan 17 12:18:34.072428 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 17 12:18:34.072441 kernel: Policy zone: DMA32 Jan 17 12:18:34.072455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:18:34.072470 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125152K reserved, 0K cma-reserved) Jan 17 12:18:34.072484 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:18:34.072501 kernel: Kernel/User page tables isolation: enabled Jan 17 12:18:34.072514 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:18:34.072528 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:18:34.072542 kernel: Dynamic Preempt: voluntary Jan 17 12:18:34.072556 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:18:34.072576 kernel: rcu: RCU event tracing is enabled. Jan 17 12:18:34.072591 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:18:34.072605 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:18:34.072619 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:18:34.072633 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:18:34.072650 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:18:34.072664 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:18:34.072678 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:18:34.072692 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 12:18:34.072705 kernel: Console: colour VGA+ 80x25 Jan 17 12:18:34.072719 kernel: printk: console [ttyS0] enabled Jan 17 12:18:34.072733 kernel: ACPI: Core revision 20230628 Jan 17 12:18:34.072747 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 17 12:18:34.072761 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:18:34.072778 kernel: x2apic enabled Jan 17 12:18:34.072792 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:18:34.072817 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 17 12:18:34.072835 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 17 12:18:34.072850 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:18:34.072865 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:18:34.072880 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:18:34.072894 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:18:34.072908 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:18:34.072923 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:18:34.072938 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 17 12:18:34.072952 kernel: RETBleed: Vulnerable Jan 17 12:18:34.072970 kernel: Speculative Store Bypass: Vulnerable Jan 17 12:18:34.072984 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:18:34.072999 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:18:34.073014 kernel: GDS: Unknown: Dependent on hypervisor status Jan 17 12:18:34.073028 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:18:34.073043 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:18:34.073061 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:18:34.073167 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 17 12:18:34.073183 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 17 12:18:34.073198 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 17 12:18:34.073212 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 17 12:18:34.073227 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 17 12:18:34.073242 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 17 12:18:34.073257 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:18:34.073272 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 17 12:18:34.073286 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 17 12:18:34.073301 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 17 12:18:34.073319 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 17 12:18:34.073333 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 17 12:18:34.073348 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 17 12:18:34.073363 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 17 12:18:34.073377 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:18:34.073392 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:18:34.073407 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:18:34.073422 kernel: landlock: Up and running. Jan 17 12:18:34.073437 kernel: SELinux: Initializing. Jan 17 12:18:34.073452 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:18:34.073467 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:18:34.073482 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 17 12:18:34.073500 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:18:34.073515 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:18:34.073530 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:18:34.073546 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 17 12:18:34.073560 kernel: signal: max sigframe size: 3632 Jan 17 12:18:34.073575 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:18:34.073590 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:18:34.073605 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:18:34.073620 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:18:34.073638 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:18:34.073652 kernel: .... node #0, CPUs: #1 Jan 17 12:18:34.073669 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 12:18:34.073685 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 12:18:34.073700 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:18:34.073715 kernel: smpboot: Max logical packages: 1 Jan 17 12:18:34.073737 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 17 12:18:34.073752 kernel: devtmpfs: initialized Jan 17 12:18:34.073770 kernel: x86/mm: Memory block size: 128MB Jan 17 12:18:34.073785 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:18:34.073799 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:18:34.073814 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:18:34.073829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:18:34.073844 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:18:34.073859 kernel: audit: type=2000 audit(1737116313.034:1): state=initialized audit_enabled=0 res=1 Jan 17 12:18:34.073873 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:18:34.073888 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:18:34.073906 kernel: cpuidle: using governor menu Jan 17 12:18:34.073921 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:18:34.073936 kernel: dca service started, version 1.12.1 Jan 17 12:18:34.073951 kernel: PCI: Using configuration type 1 for base access Jan 17 12:18:34.073966 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:18:34.073981 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:18:34.073996 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:18:34.074011 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:18:34.074026 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:18:34.074044 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:18:34.074059 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:18:34.077137 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:18:34.077158 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:18:34.077176 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 12:18:34.077192 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:18:34.077209 kernel: ACPI: Interpreter enabled Jan 17 12:18:34.077226 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:18:34.077242 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:18:34.077266 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:18:34.077283 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:18:34.077299 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 12:18:34.077316 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:18:34.077562 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:18:34.077702 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:18:34.077844 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:18:34.077867 kernel: acpiphp: Slot [3] registered Jan 17 12:18:34.077884 kernel: acpiphp: Slot [4] registered Jan 17 12:18:34.077900 kernel: acpiphp: Slot [5] registered Jan 17 12:18:34.077916 kernel: acpiphp: Slot [6] registered Jan 17 12:18:34.077933 kernel: acpiphp: Slot [7] registered Jan 17 12:18:34.077949 kernel: acpiphp: Slot [8] registered Jan 17 12:18:34.077966 kernel: acpiphp: Slot [9] registered Jan 17 12:18:34.077982 kernel: acpiphp: Slot [10] registered Jan 17 12:18:34.077999 kernel: acpiphp: Slot [11] registered Jan 17 12:18:34.078015 kernel: acpiphp: Slot [12] registered Jan 17 12:18:34.078034 kernel: acpiphp: Slot [13] registered Jan 17 12:18:34.078050 kernel: acpiphp: Slot [14] registered Jan 17 12:18:34.078090 kernel: acpiphp: Slot [15] registered Jan 17 12:18:34.078107 kernel: acpiphp: Slot [16] registered Jan 17 12:18:34.078123 kernel: acpiphp: Slot [17] registered Jan 17 12:18:34.078139 kernel: acpiphp: Slot [18] registered Jan 17 12:18:34.078156 kernel: acpiphp: Slot [19] registered Jan 17 12:18:34.078173 kernel: acpiphp: Slot [20] registered Jan 17 12:18:34.078189 kernel: acpiphp: Slot [21] registered Jan 17 12:18:34.078209 kernel: acpiphp: Slot [22] registered Jan 17 12:18:34.078225 kernel: acpiphp: Slot [23] registered Jan 17 12:18:34.078242 kernel: acpiphp: Slot [24] registered Jan 17 12:18:34.078258 kernel: acpiphp: Slot [25] registered Jan 17 12:18:34.078274 kernel: acpiphp: Slot [26] registered Jan 17 12:18:34.078291 kernel: acpiphp: Slot [27] registered Jan 17 12:18:34.078307 kernel: acpiphp: Slot [28] registered Jan 17 12:18:34.078323 kernel: acpiphp: Slot [29] registered Jan 17 12:18:34.078339 kernel: acpiphp: Slot [30] registered Jan 17 12:18:34.078355 kernel: acpiphp: Slot [31] registered Jan 17 12:18:34.078374 kernel: PCI host bridge to bus 0000:00 
Jan 17 12:18:34.078511 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:18:34.078631 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:18:34.078749 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:18:34.078863 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 12:18:34.078977 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:18:34.079138 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:18:34.079295 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:18:34.079435 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 17 12:18:34.079642 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 12:18:34.079779 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 17 12:18:34.079910 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 17 12:18:34.080042 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 17 12:18:34.082303 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 17 12:18:34.082545 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 17 12:18:34.082679 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 17 12:18:34.082854 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 17 12:18:34.082998 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 17 12:18:34.083225 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 17 12:18:34.083360 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 12:18:34.083499 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:18:34.083644 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 17 12:18:34.083771 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 17 12:18:34.083907 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 17 12:18:34.084033 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 17 12:18:34.084051 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:18:34.084066 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:18:34.087146 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:18:34.087161 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:18:34.087176 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:18:34.087190 kernel: iommu: Default domain type: Translated Jan 17 12:18:34.087204 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:18:34.087217 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:18:34.087231 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:18:34.087245 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:18:34.087259 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 17 12:18:34.087455 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 17 12:18:34.087591 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 17 12:18:34.087722 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:18:34.087740 kernel: vgaarb: loaded Jan 17 12:18:34.087754 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 17 12:18:34.087768 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 17 12:18:34.087782 kernel: clocksource: Switched 
to clocksource kvm-clock Jan 17 12:18:34.087796 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:18:34.087810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:18:34.087827 kernel: pnp: PnP ACPI init Jan 17 12:18:34.087840 kernel: pnp: PnP ACPI: found 5 devices Jan 17 12:18:34.087854 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:18:34.087868 kernel: NET: Registered PF_INET protocol family Jan 17 12:18:34.087882 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:18:34.087896 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:18:34.087910 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:18:34.087924 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:18:34.087941 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:18:34.087955 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:18:34.087969 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:18:34.087983 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:18:34.087996 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:18:34.088010 kernel: NET: Registered PF_XDP protocol family Jan 17 12:18:34.090289 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:18:34.090430 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:18:34.090555 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:18:34.090669 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 12:18:34.090891 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:18:34.090913 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:18:34.090928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:18:34.090943 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 17 12:18:34.090957 kernel: clocksource: Switched to clocksource tsc Jan 17 12:18:34.090970 kernel: Initialise system trusted keyrings Jan 17 12:18:34.090984 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:18:34.091004 kernel: Key type asymmetric registered Jan 17 12:18:34.091017 kernel: Asymmetric key parser 'x509' registered Jan 17 12:18:34.091031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:18:34.091045 kernel: io scheduler mq-deadline registered Jan 17 12:18:34.091060 kernel: io scheduler kyber registered Jan 17 12:18:34.092124 kernel: io scheduler bfq registered Jan 17 12:18:34.092148 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:18:34.092162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:18:34.092176 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:18:34.092196 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:18:34.092210 kernel: i8042: Warning: Keylock active Jan 17 12:18:34.092225 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:18:34.092238 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:18:34.092418 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 12:18:34.092542 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 
12:18:34.092663 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:18:33 UTC (1737116313) Jan 17 12:18:34.092781 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 12:18:34.092810 kernel: intel_pstate: CPU model not supported Jan 17 12:18:34.092824 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:18:34.092838 kernel: Segment Routing with IPv6 Jan 17 12:18:34.092852 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:18:34.092865 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:18:34.092879 kernel: Key type dns_resolver registered Jan 17 12:18:34.092893 kernel: IPI shorthand broadcast: enabled Jan 17 12:18:34.092907 kernel: sched_clock: Marking stable (695004059, 255077577)->(1053393945, -103312309) Jan 17 12:18:34.092921 kernel: registered taskstats version 1 Jan 17 12:18:34.092938 kernel: Loading compiled-in X.509 certificates Jan 17 12:18:34.092952 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:18:34.092966 kernel: Key type .fscrypt registered Jan 17 12:18:34.092979 kernel: Key type fscrypt-provisioning registered Jan 17 12:18:34.092994 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:18:34.093007 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:18:34.093021 kernel: ima: No architecture policies found Jan 17 12:18:34.093035 kernel: clk: Disabling unused clocks Jan 17 12:18:34.093052 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:18:34.097097 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:18:34.097137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:18:34.097154 kernel: Run /init as init process Jan 17 12:18:34.097168 kernel: with arguments: Jan 17 12:18:34.097182 kernel: /init Jan 17 12:18:34.097196 kernel: with environment: Jan 17 12:18:34.097209 kernel: HOME=/ Jan 17 12:18:34.097222 kernel: TERM=linux Jan 17 12:18:34.097236 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:18:34.097263 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:18:34.097293 systemd[1]: Detected virtualization amazon. Jan 17 12:18:34.097311 systemd[1]: Detected architecture x86-64. Jan 17 12:18:34.097326 systemd[1]: Running in initrd. Jan 17 12:18:34.097344 systemd[1]: No hostname configured, using default hostname. Jan 17 12:18:34.097358 systemd[1]: Hostname set to . Jan 17 12:18:34.097374 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:18:34.097389 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:18:34.097404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:34.097419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:34.097545 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:18:34.097562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:18:34.097582 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 17 12:18:34.097598 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:18:34.097616 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:18:34.097631 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:18:34.097647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:34.097663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:34.097678 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:18:34.097697 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:18:34.097712 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:18:34.097734 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:18:34.097749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:18:34.097764 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:18:34.097780 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:18:34.097796 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:18:34.097811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:34.097825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:34.097845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:18:34.097860 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:18:34.097876 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:18:34.097891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:18:34.097906 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:18:34.097922 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:18:34.097938 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:18:34.097956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:18:34.097971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:34.097985 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:18:34.098004 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:18:34.098022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:34.098037 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:18:34.098119 systemd-journald[178]: Collecting audit messages is disabled. Jan 17 12:18:34.098155 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:18:34.098171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:18:34.098190 kernel: Bridge firewalling registered Jan 17 12:18:34.098207 systemd-journald[178]: Journal started Jan 17 12:18:34.098238 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2feb979a32b649b39237ee552e66f1) is 4.8M, max 38.6M, 33.7M free. Jan 17 12:18:34.036215 systemd-modules-load[179]: Inserted module 'overlay' Jan 17 12:18:34.193220 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 17 12:18:34.094187 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 17 12:18:34.188413 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:34.188844 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:18:34.195455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:18:34.209228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:18:34.235764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:18:34.240824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:34.243647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:34.254137 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:34.266323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:18:34.270794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:34.278277 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:18:34.299791 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:34.310409 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:18:34.358593 systemd-resolved[203]: Positive Trust Anchors: Jan 17 12:18:34.366776 dracut-cmdline[213]: dracut-dracut-053 Jan 17 12:18:34.366776 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:18:34.358610 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:18:34.358669 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:18:34.364681 systemd-resolved[203]: Defaulting to hostname 'linux'. Jan 17 12:18:34.368197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:18:34.390974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:34.510127 kernel: SCSI subsystem initialized Jan 17 12:18:34.523114 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:18:34.537098 kernel: iscsi: registered transport (tcp) Jan 17 12:18:34.570160 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:18:34.570238 kernel: QLogic iSCSI HBA Driver Jan 17 12:18:34.638346 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 17 12:18:34.645630 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:18:34.716588 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:18:34.716668 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:18:34.719210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:18:34.821189 kernel: raid6: avx512x4 gen() 13795 MB/s Jan 17 12:18:34.838124 kernel: raid6: avx512x2 gen() 7117 MB/s Jan 17 12:18:34.855125 kernel: raid6: avx512x1 gen() 13908 MB/s Jan 17 12:18:34.872135 kernel: raid6: avx2x4 gen() 13760 MB/s Jan 17 12:18:34.889156 kernel: raid6: avx2x2 gen() 11304 MB/s Jan 17 12:18:34.906121 kernel: raid6: avx2x1 gen() 7208 MB/s Jan 17 12:18:34.906213 kernel: raid6: using algorithm avx512x1 gen() 13908 MB/s Jan 17 12:18:34.923203 kernel: raid6: .... xor() 14827 MB/s, rmw enabled Jan 17 12:18:34.923298 kernel: raid6: using avx512x2 recovery algorithm Jan 17 12:18:34.951104 kernel: xor: automatically using best checksumming function avx Jan 17 12:18:35.198105 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:18:35.209566 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:18:35.223299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:35.241705 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 17 12:18:35.248883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:18:35.257269 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:18:35.285707 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Jan 17 12:18:35.327233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:18:35.340878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:18:35.453203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:35.467402 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:18:35.507718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:18:35.512306 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:18:35.517034 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:35.522666 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:18:35.532831 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:18:35.574570 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:18:35.585095 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:18:35.596510 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 17 12:18:35.621972 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 17 12:18:35.622771 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 17 12:18:35.623332 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:54:d9:fd:4a:a1 Jan 17 12:18:35.635102 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:18:35.635168 kernel: AES CTR mode by8 optimization enabled Jan 17 12:18:35.630870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 12:18:35.631148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:35.635007 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:18:35.636167 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:18:35.644682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:18:35.644925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:35.646414 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:35.663805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:35.693103 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 17 12:18:35.693365 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 17 12:18:35.705278 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 17 12:18:35.713106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:18:35.713178 kernel: GPT:9289727 != 16777215 Jan 17 12:18:35.713197 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:18:35.713216 kernel: GPT:9289727 != 16777215 Jan 17 12:18:35.713232 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:18:35.713250 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:18:35.830095 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (443) Jan 17 12:18:35.836112 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (451) Jan 17 12:18:35.863278 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:35.873844 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:18:35.932053 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:35.986164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 17 12:18:36.002586 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 17 12:18:36.031098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 17 12:18:36.035788 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 17 12:18:36.046102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:18:36.056684 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:18:36.064546 disk-uuid[626]: Primary Header is updated. Jan 17 12:18:36.064546 disk-uuid[626]: Secondary Entries is updated. Jan 17 12:18:36.064546 disk-uuid[626]: Secondary Header is updated. Jan 17 12:18:36.071096 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:18:36.079145 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:18:36.089152 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:18:37.089154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:18:37.093881 disk-uuid[627]: The operation has completed successfully. Jan 17 12:18:37.291321 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:18:37.291446 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 17 12:18:37.330283 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:18:37.335644 sh[970]: Success Jan 17 12:18:37.359091 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:18:37.462961 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:18:37.472225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:18:37.483376 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:18:37.513783 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:18:37.513855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:37.513878 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:18:37.515734 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:18:37.516699 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:18:37.667098 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:18:37.668995 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:18:37.672299 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:18:37.683539 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:18:37.699153 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:18:37.736302 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:37.736383 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:37.736409 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:18:37.743093 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:18:37.771759 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:18:37.775116 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:37.781800 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:18:37.792448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:18:37.872473 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:18:37.879385 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:18:37.912308 systemd-networkd[1163]: lo: Link UP Jan 17 12:18:37.912320 systemd-networkd[1163]: lo: Gained carrier Jan 17 12:18:37.914965 systemd-networkd[1163]: Enumeration completed Jan 17 12:18:37.915401 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:37.915406 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:18:37.920208 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:18:37.923914 systemd[1]: Reached target network.target - Network. Jan 17 12:18:37.924339 systemd-networkd[1163]: eth0: Link UP Jan 17 12:18:37.924343 systemd-networkd[1163]: eth0: Gained carrier Jan 17 12:18:37.924357 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:18:37.947258 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.29.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:18:38.300307 ignition[1095]: Ignition 2.19.0 Jan 17 12:18:38.300321 ignition[1095]: Stage: fetch-offline Jan 17 12:18:38.300592 ignition[1095]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:38.300606 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:38.304375 ignition[1095]: Ignition finished successfully Jan 17 12:18:38.306778 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:18:38.315327 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:18:38.349637 ignition[1172]: Ignition 2.19.0 Jan 17 12:18:38.349653 ignition[1172]: Stage: fetch Jan 17 12:18:38.351743 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:38.351756 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:38.352365 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:38.376975 ignition[1172]: PUT result: OK Jan 17 12:18:38.381004 ignition[1172]: parsed url from cmdline: "" Jan 17 12:18:38.381016 ignition[1172]: no config URL provided Jan 17 12:18:38.381026 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:18:38.381041 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:18:38.381296 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:38.383677 ignition[1172]: PUT result: OK Jan 17 12:18:38.383747 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 17 12:18:38.390175 ignition[1172]: GET result: OK Jan 17 12:18:38.390329 ignition[1172]: parsing config with SHA512: 060c20506976b9373390c38ea6dc3e7261ff8a7c126f6f24a264d4038cfef1c1d2b5dc9ea8040b1713f840abdc66fdca9d1d5b0a0e2ef072b775a0ab7c975b54 Jan 17 12:18:38.405955 unknown[1172]: fetched base config from "system" Jan 17 12:18:38.406000 unknown[1172]: fetched base config from "system" Jan 17 12:18:38.406009 unknown[1172]: fetched user config from "aws" Jan 17 12:18:38.408891 ignition[1172]: fetch: fetch complete Jan 17 12:18:38.408898 ignition[1172]: fetch: fetch passed Jan 17 12:18:38.412821 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:18:38.409153 ignition[1172]: Ignition finished successfully Jan 17 12:18:38.422357 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:18:38.487485 ignition[1179]: Ignition 2.19.0 Jan 17 12:18:38.487547 ignition[1179]: Stage: kargs Jan 17 12:18:38.488130 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:38.488143 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:38.488372 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:38.489685 ignition[1179]: PUT result: OK Jan 17 12:18:38.497282 ignition[1179]: kargs: kargs passed Jan 17 12:18:38.497352 ignition[1179]: Ignition finished successfully Jan 17 12:18:38.500096 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:18:38.510725 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 17 12:18:38.544169 ignition[1185]: Ignition 2.19.0 Jan 17 12:18:38.544182 ignition[1185]: Stage: disks Jan 17 12:18:38.544637 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:38.544651 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:38.544760 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:38.550666 ignition[1185]: PUT result: OK Jan 17 12:18:38.558561 ignition[1185]: disks: disks passed Jan 17 12:18:38.558651 ignition[1185]: Ignition finished successfully Jan 17 12:18:38.560781 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:18:38.561593 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:18:38.565561 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:18:38.568361 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:18:38.570769 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:18:38.574173 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:18:38.583455 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:18:38.623715 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:18:38.628436 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:18:38.641296 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:18:38.786099 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:18:38.788285 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:18:38.790351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:18:38.803252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:18:38.813504 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:18:38.816284 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:18:38.816670 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:18:38.816705 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:18:38.826010 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:18:38.833573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:18:38.843852 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213) Jan 17 12:18:38.846094 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:38.846151 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:38.846171 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:18:38.859103 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:18:38.861237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:18:39.161062 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:18:39.180838 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:18:39.199035 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:18:39.213587 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:18:39.376264 systemd-networkd[1163]: eth0: Gained IPv6LL Jan 17 12:18:39.624144 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:18:39.631513 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:18:39.640296 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:18:39.652604 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:18:39.653636 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:39.691356 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:18:39.697731 ignition[1325]: INFO : Ignition 2.19.0 Jan 17 12:18:39.697731 ignition[1325]: INFO : Stage: mount Jan 17 12:18:39.702612 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:39.702612 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:39.702612 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:39.702612 ignition[1325]: INFO : PUT result: OK Jan 17 12:18:39.711812 ignition[1325]: INFO : mount: mount passed Jan 17 12:18:39.712720 ignition[1325]: INFO : Ignition finished successfully Jan 17 12:18:39.714096 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:18:39.719190 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:18:39.797453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:18:39.843189 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1338) Jan 17 12:18:39.843256 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:39.843276 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:39.844514 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:18:39.849100 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:18:39.851299 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:18:39.876629 ignition[1355]: INFO : Ignition 2.19.0 Jan 17 12:18:39.876629 ignition[1355]: INFO : Stage: files Jan 17 12:18:39.879540 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:39.879540 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:39.879540 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:39.884439 ignition[1355]: INFO : PUT result: OK Jan 17 12:18:39.886492 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:18:39.888323 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:18:39.888323 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:18:39.917477 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:18:39.919004 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:18:39.920525 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:18:39.919399 unknown[1355]: wrote ssh authorized keys file for user: core Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:39.929173 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:18:40.268454 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 17 12:18:40.556952 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:40.556952 ignition[1355]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 17 12:18:40.560649 ignition[1355]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:18:40.560649 ignition[1355]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:18:40.560649 ignition[1355]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 17 12:18:40.560649 ignition[1355]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:18:40.560649 ignition[1355]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:18:40.560649 ignition[1355]: INFO : files: files passed Jan 17 12:18:40.560649 ignition[1355]: INFO : Ignition finished successfully Jan 17 12:18:40.569230 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:18:40.580280 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:18:40.593803 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:18:40.606467 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:18:40.608415 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:18:40.619955 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:40.619955 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:40.626341 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:40.627793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:18:40.630852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:18:40.637415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:18:40.683320 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:18:40.683501 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:18:40.687910 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:18:40.689035 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:18:40.691533 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:18:40.697338 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:18:40.717612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:18:40.724359 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:18:40.741478 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:40.746371 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:40.746881 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:18:40.762849 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:18:40.764920 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:18:40.769898 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:18:40.779188 systemd[1]: Stopped target basic.target - Basic System. 
Jan 17 12:18:40.782344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:18:40.784231 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:18:40.795869 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:18:40.797818 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:18:40.801396 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:18:40.803156 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:18:40.806455 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:18:40.807874 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:18:40.812155 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:18:40.812318 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:18:40.816695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:40.820392 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:40.821997 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:18:40.823602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:40.830712 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:18:40.830928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:18:40.833922 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:18:40.834057 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:18:40.840592 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:18:40.840768 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:18:40.849926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:18:40.851867 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:18:40.853090 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:40.868442 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:18:40.870765 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:18:40.872337 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:40.875905 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:18:40.876297 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:18:40.889560 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:18:40.890692 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:18:40.893872 ignition[1407]: INFO : Ignition 2.19.0 Jan 17 12:18:40.893872 ignition[1407]: INFO : Stage: umount Jan 17 12:18:40.893872 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:40.893872 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:18:40.893872 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:18:40.919266 ignition[1407]: INFO : PUT result: OK Jan 17 12:18:40.919266 ignition[1407]: INFO : umount: umount passed Jan 17 12:18:40.919266 ignition[1407]: INFO : Ignition finished successfully Jan 17 12:18:40.930248 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 17 12:18:40.930392 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:18:40.943277 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:18:40.943606 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:18:40.944840 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:18:40.944905 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:18:40.946352 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:18:40.946409 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:18:40.947755 systemd[1]: Stopped target network.target - Network. Jan 17 12:18:40.948829 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:18:40.948882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:18:40.948970 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:18:40.951333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:18:40.963835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:40.963952 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:18:40.969324 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:18:40.969511 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:18:40.969552 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:18:40.973997 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:18:40.974097 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:18:40.974488 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:18:40.974561 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:18:40.974884 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:18:40.974927 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:18:40.975145 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:18:40.975617 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:18:40.976907 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:18:40.978018 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:18:40.978118 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:18:40.979110 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:18:40.979189 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:18:40.987053 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:18:40.987200 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:18:40.989187 systemd-networkd[1163]: eth0: DHCPv6 lease lost Jan 17 12:18:40.995956 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:18:40.996029 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:40.998401 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:18:40.998515 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:18:41.002130 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:18:41.002186 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:41.015388 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 17 12:18:41.021606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:18:41.021818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:18:41.026163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:18:41.026234 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:41.029092 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:18:41.029215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:41.045306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:41.072502 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:18:41.072722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:18:41.075062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:18:41.075141 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:41.078779 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:18:41.078831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:18:41.080212 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:18:41.080269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:18:41.088057 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:18:41.088151 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:18:41.091679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:18:41.091853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:41.103263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:18:41.110327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:18:41.110430 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:41.116160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:18:41.116309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:41.118134 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:18:41.118268 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:18:41.125348 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:18:41.125450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:18:41.129823 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:18:41.138209 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:18:41.201846 systemd[1]: Switching root. Jan 17 12:18:41.228703 systemd-journald[178]: Journal stopped Jan 17 12:18:43.995202 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Jan 17 12:18:43.995291 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:18:43.995312 kernel: SELinux: policy capability open_perms=1 Jan 17 12:18:43.995340 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:18:43.995357 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:18:43.995373 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:18:43.995390 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:18:43.995407 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:18:43.995424 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:18:43.995441 kernel: audit: type=1403 audit(1737116322.050:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:18:43.995465 systemd[1]: Successfully loaded SELinux policy in 79.541ms. Jan 17 12:18:43.995498 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.231ms. Jan 17 12:18:43.995517 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:18:43.995540 systemd[1]: Detected virtualization amazon. Jan 17 12:18:43.995558 systemd[1]: Detected architecture x86-64. Jan 17 12:18:43.995575 systemd[1]: Detected first boot. Jan 17 12:18:43.995593 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:18:43.995614 zram_generator::config[1466]: No configuration found. Jan 17 12:18:43.995634 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:18:43.995652 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:18:43.995670 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:18:43.995694 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:18:43.995719 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:18:43.995737 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:18:43.995758 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:18:43.995780 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:18:43.995798 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:18:43.995815 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:18:43.995832 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:18:43.995851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:43.995868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:43.995887 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:18:43.995903 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:18:43.995921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:18:43.995942 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 17 12:18:43.995961 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:18:43.995979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:43.995996 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:18:43.996013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:43.996030 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:18:43.996048 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:18:43.996080 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:18:43.996102 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:18:43.996121 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:18:43.996139 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:18:43.999192 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:18:43.999217 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:43.999236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:43.999255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:18:43.999274 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:18:43.999293 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:18:43.999319 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:18:43.999338 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:18:43.999357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:43.999375 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:18:43.999393 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:18:43.999411 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:18:43.999429 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:18:43.999447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:18:43.999466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:18:43.999487 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:18:43.999506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:18:43.999524 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:18:43.999541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:43.999560 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:18:43.999579 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:18:43.999598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:18:43.999616 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 17 12:18:43.999638 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:18:43.999655 kernel: fuse: init (API version 7.39) Jan 17 12:18:43.999674 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:18:43.999692 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:18:43.999711 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:18:43.999729 kernel: loop: module loaded Jan 17 12:18:43.999746 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:18:43.999764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:18:43.999784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:43.999804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:18:43.999822 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:18:43.999841 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:18:43.999897 systemd-journald[1567]: Collecting audit messages is disabled. Jan 17 12:18:43.999931 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:18:43.999949 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:18:43.999967 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:18:43.999989 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:44.000007 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:18:44.000025 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:18:44.000044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:18:44.000062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:18:44.010152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:44.010186 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:44.010213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:18:44.010231 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:18:44.010258 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:18:44.010278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:18:44.010303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:44.010323 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:18:44.010341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:18:44.010367 systemd-journald[1567]: Journal started Jan 17 12:18:44.010409 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec2feb979a32b649b39237ee552e66f1) is 4.8M, max 38.6M, 33.7M free. Jan 17 12:18:44.012281 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:18:44.018820 kernel: ACPI: bus type drm_connector registered Jan 17 12:18:44.022408 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:18:44.025330 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 17 12:18:44.027879 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:18:44.042367 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:18:44.050494 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:18:44.061197 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:18:44.065173 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:18:44.074334 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:18:44.084917 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:18:44.086330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:18:44.098843 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:18:44.100852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:18:44.113886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:18:44.123252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:18:44.132757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:18:44.134334 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:18:44.147774 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec2feb979a32b649b39237ee552e66f1 is 66.326ms for 931 entries. Jan 17 12:18:44.147774 systemd-journald[1567]: System Journal (/var/log/journal/ec2feb979a32b649b39237ee552e66f1) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:18:44.228457 systemd-journald[1567]: Received client request to flush runtime journal. Jan 17 12:18:44.156137 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:18:44.161614 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:18:44.195512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:44.210380 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:18:44.220566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:44.223142 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Jan 17 12:18:44.223166 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Jan 17 12:18:44.234599 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:18:44.243722 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:18:44.258606 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:18:44.264265 udevadm[1625]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:18:44.307138 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:18:44.321330 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:18:44.383787 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. 
Jan 17 12:18:44.383818 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Jan 17 12:18:44.393280 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:45.302171 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:18:45.314231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:45.367441 systemd-udevd[1643]: Using default interface naming scheme 'v255'. Jan 17 12:18:45.451049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:18:45.463360 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:18:45.507318 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:18:45.542100 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:18:45.545452 (udev-worker)[1658]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:18:45.633748 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:18:45.732093 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 17 12:18:45.745159 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:18:45.781110 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 17 12:18:45.786384 systemd-networkd[1647]: lo: Link UP Jan 17 12:18:45.786394 systemd-networkd[1647]: lo: Gained carrier Jan 17 12:18:45.792913 systemd-networkd[1647]: Enumeration completed Jan 17 12:18:45.793799 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:45.794019 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:18:45.794576 systemd-networkd[1647]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:18:45.800460 systemd-networkd[1647]: eth0: Link UP Jan 17 12:18:45.800813 systemd-networkd[1647]: eth0: Gained carrier Jan 17 12:18:45.801920 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:18:45.802049 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 17 12:18:45.801596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:18:45.801739 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:45.823138 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 12:18:45.818159 systemd-networkd[1647]: eth0: DHCPv4 address 172.31.29.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:18:45.845112 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:18:45.847090 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1653) Jan 17 12:18:45.887468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:46.027368 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:18:46.041438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:18:46.048317 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:18:46.167204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
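The DHCPv4 lease above (172.31.29.125/20 via gateway 172.31.16.1) is internally consistent: a /20 leaves 12 host bits, so the subnet 172.31.16.0/20 spans 172.31.16.0 through 172.31.31.255 and contains both the assigned address and the gateway. A quick check with Python's ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.29.125/20")
    gw = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                 # 172.31.16.0/20
    print(iface.network.num_addresses)   # 4096 addresses in a /20
    print(gw in iface.network)           # True: the gateway is on-link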
Jan 17 12:18:46.201999 lvm[1765]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:18:46.236084 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:18:46.239058 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:46.248462 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:18:46.259388 lvm[1770]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:18:46.290148 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:18:46.292601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:18:46.294918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:18:46.294964 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:18:46.296493 systemd[1]: Reached target machines.target - Containers. Jan 17 12:18:46.299452 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:18:46.310322 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:18:46.332477 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:18:46.334738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:46.338541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:18:46.348298 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:18:46.369784 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:18:46.373410 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:18:46.408273 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:18:46.417062 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:18:46.427525 kernel: loop0: detected capacity change from 0 to 61336 Jan 17 12:18:46.427358 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:18:46.556096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:18:46.582110 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 12:18:46.726105 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:18:46.796686 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:18:46.934169 kernel: loop4: detected capacity change from 0 to 61336 Jan 17 12:18:46.957101 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 12:18:47.009118 kernel: loop6: detected capacity change from 0 to 211296 Jan 17 12:18:47.052098 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 12:18:47.080371 (sd-merge)[1791]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:18:47.081063 (sd-merge)[1791]: Merged extensions into '/usr'. Jan 17 12:18:47.088593 systemd[1]: Reloading requested from client PID 1778 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:18:47.088614 systemd[1]: Reloading... 
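The loop0-loop7 capacity changes above are the extension images being attached, and the (sd-merge) lines show systemd-sysext overlaying them onto /usr. As the author understands the mechanism, each image must carry usr/lib/extension-release.d/extension-release.<NAME> whose ID is compatible with the host's os-release before it is merged; a simplified illustration of that compatibility check (not systemd's actual code, and ignoring SYSEXT_LEVEL/VERSION_ID matching):

    import os

    def parse_release(path):
        # Parse KEY=value lines from an os-release style file.
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    fields[key] = value.strip('"')
        return fields

    def extension_compatible(image_root, name, host_os_release="/etc/os-release"):
        rel = os.path.join(image_root, "usr/lib/extension-release.d",
                           f"extension-release.{name}")
        ext = parse_release(rel)
        host = parse_release(host_os_release)
        # ID=_any matches every host; otherwise the IDs must agree.
        return ext.get("ID") in ("_any", host.get("ID"))

    # Hypothetical mount point for an attached extension image:
    # print(extension_compatible("/run/sysext/kubernetes", "kubernetes"))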
Jan 17 12:18:47.120187 systemd-networkd[1647]: eth0: Gained IPv6LL Jan 17 12:18:47.170095 zram_generator::config[1817]: No configuration found. Jan 17 12:18:47.440057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:47.587770 systemd[1]: Reloading finished in 498 ms. Jan 17 12:18:47.611614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:18:47.614203 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:18:47.625908 systemd[1]: Starting ensure-sysext.service... Jan 17 12:18:47.634916 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:18:47.651053 systemd[1]: Reloading requested from client PID 1875 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:18:47.651093 systemd[1]: Reloading... Jan 17 12:18:47.676520 systemd-tmpfiles[1876]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:18:47.677727 systemd-tmpfiles[1876]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:18:47.679780 systemd-tmpfiles[1876]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:18:47.680607 systemd-tmpfiles[1876]: ACLs are not supported, ignoring. Jan 17 12:18:47.680827 systemd-tmpfiles[1876]: ACLs are not supported, ignoring. Jan 17 12:18:47.686606 systemd-tmpfiles[1876]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:18:47.686766 systemd-tmpfiles[1876]: Skipping /boot Jan 17 12:18:47.703030 systemd-tmpfiles[1876]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:18:47.703799 systemd-tmpfiles[1876]: Skipping /boot Jan 17 12:18:47.799103 zram_generator::config[1906]: No configuration found. Jan 17 12:18:48.029049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:48.053606 ldconfig[1774]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:18:48.116362 systemd[1]: Reloading finished in 464 ms. Jan 17 12:18:48.133964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:18:48.140923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:48.156286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:48.161275 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:18:48.166312 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:18:48.179346 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:18:48.188453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:18:48.206258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:48.207055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 12:18:48.222038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:18:48.225393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:48.243454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:18:48.244934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:48.245161 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:48.258823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:48.259109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:48.285217 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:18:48.295402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:18:48.295644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:18:48.307605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:48.317777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:18:48.337177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:48.339279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:48.358233 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:18:48.362183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:48.363970 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:18:48.375661 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:18:48.375977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:18:48.378551 augenrules[1998]: No rules Jan 17 12:18:48.379830 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:48.382202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:48.382958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:48.415023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:48.417563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:18:48.424506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:18:48.437409 systemd-resolved[1969]: Positive Trust Anchors: Jan 17 12:18:48.437716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:18:48.437911 systemd-resolved[1969]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:18:48.437967 systemd-resolved[1969]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:18:48.444460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:48.448509 systemd-resolved[1969]: Defaulting to hostname 'linux'. Jan 17 12:18:48.460431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:18:48.461909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:48.462198 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:18:48.463505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:48.464900 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:18:48.471245 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:18:48.473247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:18:48.473508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:18:48.475327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:18:48.475554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:18:48.479033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:48.479283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:48.481487 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:18:48.481741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:18:48.489898 systemd[1]: Finished ensure-sysext.service. Jan 17 12:18:48.494575 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:18:48.502446 systemd[1]: Reached target network.target - Network. Jan 17 12:18:48.503460 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:18:48.504558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:48.508968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:18:48.509041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:18:48.509062 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:18:48.509124 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:18:48.511268 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
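The positive trust anchor logged above (". IN DS 20326 8 2 e06d...") is the root zone's DS record: key tag 20326 identifies the 2017 root KSK, algorithm 8 is RSA/SHA-256, and digest type 2 is SHA-256. Splitting the record into those RDATA fields as a small check:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(key_tag)      # 20326 -> key tag of the 2017 root KSK
    print(algorithm)    # 8     -> RSA/SHA-256
    print(digest_type)  # 2     -> SHA-256
    print(len(digest))  # 64 hex characters = 32-byte SHA-256 digest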
Jan 17 12:18:48.514930 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:18:48.519290 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:18:48.520622 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:18:48.521999 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:18:48.523255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:18:48.523283 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:18:48.524205 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:18:48.527360 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:18:48.534345 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:18:48.539635 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:18:48.543604 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:18:48.545546 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:18:48.547003 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:18:48.550485 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:18:48.550564 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:18:48.550598 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:18:48.555215 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:18:48.564522 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:18:48.572315 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:18:48.596524 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:18:48.601485 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:18:48.602724 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:18:48.631173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:48.653586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:18:48.662748 jq[2035]: false Jan 17 12:18:48.671445 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:18:48.682805 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:18:48.688848 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:18:48.702302 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 17 12:18:48.729494 extend-filesystems[2036]: Found loop4 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found loop5 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found loop6 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found loop7 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p1 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p2 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p3 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found usr Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p4 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p6 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p7 Jan 17 12:18:48.729494 extend-filesystems[2036]: Found nvme0n1p9 Jan 17 12:18:48.729494 extend-filesystems[2036]: Checking size of /dev/nvme0n1p9 Jan 17 12:18:48.756241 dbus-daemon[2033]: [system] SELinux support is enabled Jan 17 12:18:48.733383 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:18:48.760764 dbus-daemon[2033]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1647 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:18:48.745294 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:18:48.758697 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:18:48.785327 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:18:48.816355 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:18:48.820121 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:18:48.839484 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:18:48.839819 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:18:48.849413 ntpd[2039]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:18:48.851543 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: ---------------------------------------------------- Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: corporation. Support and training for ntp-4 are Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: available at https://www.nwtime.org/support Jan 17 12:18:48.851999 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: ---------------------------------------------------- Jan 17 12:18:48.849446 ntpd[2039]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:18:48.851884 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 17 12:18:48.849457 ntpd[2039]: ---------------------------------------------------- Jan 17 12:18:48.849468 ntpd[2039]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:18:48.849477 ntpd[2039]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:18:48.849487 ntpd[2039]: corporation. Support and training for ntp-4 are Jan 17 12:18:48.849497 ntpd[2039]: available at https://www.nwtime.org/support Jan 17 12:18:48.849507 ntpd[2039]: ---------------------------------------------------- Jan 17 12:18:48.872621 ntpd[2039]: proto: precision = 0.069 usec (-24) Jan 17 12:18:48.872987 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: proto: precision = 0.069 usec (-24) Jan 17 12:18:48.873749 ntpd[2039]: basedate set to 2025-01-05 Jan 17 12:18:48.873882 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: basedate set to 2025-01-05 Jan 17 12:18:48.874122 ntpd[2039]: gps base set to 2025-01-05 (week 2348) Jan 17 12:18:48.874212 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: gps base set to 2025-01-05 (week 2348) Jan 17 12:18:48.877968 ntpd[2039]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:18:48.879055 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:18:48.879158 ntpd[2039]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listen normally on 3 eth0 172.31.29.125:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listen normally on 4 lo [::1]:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listen normally on 5 eth0 [fe80::454:d9ff:fefd:4aa1%2]:123 Jan 17 12:18:48.880997 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: Listening on routing socket on fd #22 for interface updates Jan 17 12:18:48.879436 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:18:48.879352 ntpd[2039]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:18:48.879398 ntpd[2039]: Listen normally on 3 eth0 172.31.29.125:123 Jan 17 12:18:48.879440 ntpd[2039]: Listen normally on 4 lo [::1]:123 Jan 17 12:18:48.879486 ntpd[2039]: Listen normally on 5 eth0 [fe80::454:d9ff:fefd:4aa1%2]:123 Jan 17 12:18:48.879523 ntpd[2039]: Listening on routing socket on fd #22 for interface updates Jan 17 12:18:48.882351 coreos-metadata[2032]: Jan 17 12:18:48.882 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.883 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.883 INFO Fetch successful Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.884 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.885 INFO Fetch successful Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.886 INFO Fetch successful Jan 17 12:18:48.887014 coreos-metadata[2032]: Jan 17 12:18:48.886 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:18:48.887378 extend-filesystems[2036]: Resized partition /dev/nvme0n1p9 Jan 17 12:18:48.882681 ntpd[2039]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:48.892672 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:48.892672 ntpd[2039]: 17 Jan 12:18:48 ntpd[2039]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:48.892775 jq[2063]: true Jan 17 12:18:48.882717 ntpd[2039]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:48.900117 extend-filesystems[2078]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:18:48.911292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:18:48.912372 dbus-daemon[2033]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:18:48.911360 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:18:48.915345 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:18:48.915381 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 17 12:18:48.921083 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:18:48.934524 coreos-metadata[2032]: Jan 17 12:18:48.934 INFO Fetch successful Jan 17 12:18:48.934524 coreos-metadata[2032]: Jan 17 12:18:48.934 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:18:48.935301 coreos-metadata[2032]: Jan 17 12:18:48.935 INFO Fetch failed with 404: resource not found Jan 17 12:18:48.935429 coreos-metadata[2032]: Jan 17 12:18:48.935 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:18:48.946091 coreos-metadata[2032]: Jan 17 12:18:48.942 INFO Fetch successful Jan 17 12:18:48.946091 coreos-metadata[2032]: Jan 17 12:18:48.943 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:18:48.945201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:18:48.960664 coreos-metadata[2032]: Jan 17 12:18:48.960 INFO Fetch successful Jan 17 12:18:48.960664 coreos-metadata[2032]: Jan 17 12:18:48.960 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 12:18:48.964059 coreos-metadata[2032]: Jan 17 12:18:48.964 INFO Fetch successful Jan 17 12:18:48.964208 coreos-metadata[2032]: Jan 17 12:18:48.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:18:48.966208 coreos-metadata[2032]: Jan 17 12:18:48.966 INFO Fetch successful Jan 17 12:18:48.966208 coreos-metadata[2032]: Jan 17 12:18:48.966 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:18:48.970964 update_engine[2060]: I20250117 12:18:48.970829 2060 main.cc:92] Flatcar Update Engine starting Jan 17 12:18:48.976104 coreos-metadata[2032]: Jan 17 12:18:48.975 INFO Fetch successful Jan 17 12:18:48.977537 (ntainerd)[2088]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:18:49.003242 jq[2083]: true Jan 17 12:18:49.010208 update_engine[2060]: I20250117 12:18:48.998761 2060 update_check_scheduler.cc:74] Next update check in 10m22s Jan 17 12:18:48.991345 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:18:49.066820 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:18:49.072564 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:18:49.094286 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 12:18:49.099416 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:18:49.106053 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:18:49.129100 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:18:49.168134 extend-filesystems[2078]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:18:49.168134 extend-filesystems[2078]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:18:49.168134 extend-filesystems[2078]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:18:49.185375 extend-filesystems[2036]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:18:49.170671 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:18:49.171019 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 17 12:18:49.188059 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:18:49.190434 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:18:49.283409 systemd-logind[2054]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:18:49.285524 systemd-logind[2054]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 12:18:49.287140 systemd-logind[2054]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:18:49.287386 systemd-logind[2054]: New seat seat0. Jan 17 12:18:49.292624 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:18:49.311248 bash[2140]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:18:49.312337 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:18:49.323558 systemd[1]: Starting sshkeys.service... Jan 17 12:18:49.339759 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:18:49.352309 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:18:49.407977 coreos-metadata[2149]: Jan 17 12:18:49.402 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:18:49.407977 coreos-metadata[2149]: Jan 17 12:18:49.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:18:49.407977 coreos-metadata[2149]: Jan 17 12:18:49.403 INFO Fetch successful Jan 17 12:18:49.407977 coreos-metadata[2149]: Jan 17 12:18:49.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:18:49.425672 amazon-ssm-agent[2109]: Initializing new seelog logger Jan 17 12:18:49.425672 amazon-ssm-agent[2109]: New Seelog Logger Creation Complete Jan 17 12:18:49.425672 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.425672 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.425672 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 processing appconfig overrides Jan 17 12:18:49.429193 coreos-metadata[2149]: Jan 17 12:18:49.422 INFO Fetch successful Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 processing appconfig overrides Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 processing appconfig overrides Jan 17 12:18:49.429247 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO Proxy environment variables: Jan 17 12:18:49.430623 unknown[2149]: wrote ssh authorized keys file for user: core Jan 17 12:18:49.433531 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:18:49.433652 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 12:18:49.433866 amazon-ssm-agent[2109]: 2025/01/17 12:18:49 processing appconfig overrides Jan 17 12:18:49.525914 update-ssh-keys[2161]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:18:49.526142 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:18:49.529662 systemd[1]: Finished sshkeys.service. Jan 17 12:18:49.552515 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO http_proxy: Jan 17 12:18:49.587454 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2162) Jan 17 12:18:49.591006 dbus-daemon[2033]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:18:49.591624 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:18:49.595323 dbus-daemon[2033]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2097 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:18:49.606748 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:18:49.640689 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO no_proxy: Jan 17 12:18:49.683349 polkitd[2179]: Started polkitd version 121 Jan 17 12:18:49.728708 polkitd[2179]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:18:49.728806 polkitd[2179]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:18:49.733965 polkitd[2179]: Finished loading, compiling and executing 2 rules Jan 17 12:18:49.740384 dbus-daemon[2033]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:18:49.740592 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:18:49.741269 polkitd[2179]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:18:49.741835 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO https_proxy: Jan 17 12:18:49.804682 systemd-resolved[1969]: System hostname changed to 'ip-172-31-29-125'. Jan 17 12:18:49.805214 systemd-hostnamed[2097]: Hostname set to (transient) Jan 17 12:18:49.849713 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:18:49.895037 locksmithd[2110]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:18:49.952089 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:18:50.009661 containerd[2088]: time="2025-01-17T12:18:50.009540046Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:18:50.050179 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO Agent will take identity from EC2 Jan 17 12:18:50.114393 containerd[2088]: time="2025-01-17T12:18:50.114246952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.118718542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.118786252Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.118827080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119058629Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119109375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119218081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119240120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119643135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119664503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119683291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120097 containerd[2088]: time="2025-01-17T12:18:50.119697084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120579 containerd[2088]: time="2025-01-17T12:18:50.119807927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120579 containerd[2088]: time="2025-01-17T12:18:50.120103060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120579 containerd[2088]: time="2025-01-17T12:18:50.120363434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:50.120579 containerd[2088]: time="2025-01-17T12:18:50.120406965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:18:50.120579 containerd[2088]: time="2025-01-17T12:18:50.120534030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:18:50.120765 containerd[2088]: time="2025-01-17T12:18:50.120609467Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:18:50.127414 containerd[2088]: time="2025-01-17T12:18:50.127369348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:18:50.127534 containerd[2088]: time="2025-01-17T12:18:50.127462065Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 17 12:18:50.127534 containerd[2088]: time="2025-01-17T12:18:50.127484743Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:18:50.127623 containerd[2088]: time="2025-01-17T12:18:50.127546579Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:18:50.127623 containerd[2088]: time="2025-01-17T12:18:50.127570408Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:18:50.127771 containerd[2088]: time="2025-01-17T12:18:50.127750174Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:18:50.128305 containerd[2088]: time="2025-01-17T12:18:50.128282158Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:18:50.128461 containerd[2088]: time="2025-01-17T12:18:50.128436171Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:18:50.128506 containerd[2088]: time="2025-01-17T12:18:50.128471143Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:18:50.128506 containerd[2088]: time="2025-01-17T12:18:50.128493248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:18:50.128592 containerd[2088]: time="2025-01-17T12:18:50.128515055Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128592 containerd[2088]: time="2025-01-17T12:18:50.128534966Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128592 containerd[2088]: time="2025-01-17T12:18:50.128554987Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128592 containerd[2088]: time="2025-01-17T12:18:50.128576639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128726 containerd[2088]: time="2025-01-17T12:18:50.128598831Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128726 containerd[2088]: time="2025-01-17T12:18:50.128619467Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128726 containerd[2088]: time="2025-01-17T12:18:50.128638533Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128726 containerd[2088]: time="2025-01-17T12:18:50.128659256Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:18:50.128726 containerd[2088]: time="2025-01-17T12:18:50.128688977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128726 containerd[2088]: time="2025-01-17T12:18:50.128712629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128731480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128752150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128796612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128827488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128851609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128873205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128894712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.128936 containerd[2088]: time="2025-01-17T12:18:50.128918093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.129247 containerd[2088]: time="2025-01-17T12:18:50.128937559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.129247 containerd[2088]: time="2025-01-17T12:18:50.128955787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.129247 containerd[2088]: time="2025-01-17T12:18:50.128976028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.129247 containerd[2088]: time="2025-01-17T12:18:50.128998830Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:18:50.129247 containerd[2088]: time="2025-01-17T12:18:50.129031251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.129247 containerd[2088]: time="2025-01-17T12:18:50.129049010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.129065391Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130263376Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130294634Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130312127Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130330879Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130346595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130388731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130404527Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:18:50.132094 containerd[2088]: time="2025-01-17T12:18:50.130419745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:18:50.132491 containerd[2088]: time="2025-01-17T12:18:50.130826493Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:18:50.132491 containerd[2088]: time="2025-01-17T12:18:50.130908839Z" level=info msg="Connect containerd service" Jan 17 12:18:50.132491 containerd[2088]: time="2025-01-17T12:18:50.130969593Z" level=info msg="using legacy CRI server" Jan 17 12:18:50.132491 containerd[2088]: time="2025-01-17T12:18:50.130979425Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:18:50.132491 containerd[2088]: 
time="2025-01-17T12:18:50.131153598Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:18:50.132491 containerd[2088]: time="2025-01-17T12:18:50.131813557Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.134762069Z" level=info msg="Start subscribing containerd event" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.134847248Z" level=info msg="Start recovering state" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.134959457Z" level=info msg="Start event monitor" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.134981650Z" level=info msg="Start snapshots syncer" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.134996257Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.135005650Z" level=info msg="Start streaming server" Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.135124672Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.135205059Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:18:50.137085 containerd[2088]: time="2025-01-17T12:18:50.137035129Z" level=info msg="containerd successfully booted in 0.132096s" Jan 17 12:18:50.135463 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:18:50.149155 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:18:50.248839 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:18:50.325448 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:18:50.325448 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:18:50.325448 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 12:18:50.325448 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [Registrar] Starting registrar module Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:49 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:50 INFO [EC2Identity] EC2 registration was successful. 
Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:50 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:50 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:18:50.325677 amazon-ssm-agent[2109]: 2025-01-17 12:18:50 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:18:50.348251 amazon-ssm-agent[2109]: 2025-01-17 12:18:50 INFO [CredentialRefresher] Next credential rotation will be in 30.8749940397 minutes Jan 17 12:18:50.436039 sshd_keygen[2084]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:18:50.466253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:18:50.474466 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:18:50.485424 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:18:50.485872 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:18:50.496503 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:18:50.510755 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:18:50.518563 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:18:50.528929 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:18:50.530411 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:18:51.303501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:51.309555 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:18:51.313900 systemd[1]: Startup finished in 9.139s (kernel) + 9.342s (userspace) = 18.481s. Jan 17 12:18:51.361474 amazon-ssm-agent[2109]: 2025-01-17 12:18:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:18:51.450144 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:51.462579 amazon-ssm-agent[2109]: 2025-01-17 12:18:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2317) started Jan 17 12:18:51.563106 amazon-ssm-agent[2109]: 2025-01-17 12:18:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:18:52.731767 kubelet[2314]: E0117 12:18:52.731654 2314 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:52.734772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:52.735164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:56.332629 systemd-resolved[1969]: Clock change detected. Flushing caches. Jan 17 12:18:57.264477 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:18:57.279675 systemd[1]: Started sshd@0-172.31.29.125:22-139.178.89.65:38272.service - OpenSSH per-connection server daemon (139.178.89.65:38272). 
Jan 17 12:18:57.462605 sshd[2339]: Accepted publickey for core from 139.178.89.65 port 38272 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:57.464288 sshd[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:57.476106 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:18:57.482228 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:18:57.487640 systemd-logind[2054]: New session 1 of user core. Jan 17 12:18:57.501942 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:18:57.513031 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:18:57.530713 (systemd)[2345]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:18:57.656926 systemd[2345]: Queued start job for default target default.target. Jan 17 12:18:57.657686 systemd[2345]: Created slice app.slice - User Application Slice. Jan 17 12:18:57.657800 systemd[2345]: Reached target paths.target - Paths. Jan 17 12:18:57.657910 systemd[2345]: Reached target timers.target - Timers. Jan 17 12:18:57.666767 systemd[2345]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:18:57.676458 systemd[2345]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:18:57.676725 systemd[2345]: Reached target sockets.target - Sockets. Jan 17 12:18:57.676750 systemd[2345]: Reached target basic.target - Basic System. Jan 17 12:18:57.676817 systemd[2345]: Reached target default.target - Main User Target. Jan 17 12:18:57.676857 systemd[2345]: Startup finished in 139ms. Jan 17 12:18:57.677329 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:18:57.690262 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:18:57.839159 systemd[1]: Started sshd@1-172.31.29.125:22-139.178.89.65:38288.service - OpenSSH per-connection server daemon (139.178.89.65:38288). Jan 17 12:18:58.019532 sshd[2357]: Accepted publickey for core from 139.178.89.65 port 38288 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:58.022316 sshd[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:58.029684 systemd-logind[2054]: New session 2 of user core. Jan 17 12:18:58.037096 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:18:58.171279 sshd[2357]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:58.182098 systemd[1]: sshd@1-172.31.29.125:22-139.178.89.65:38288.service: Deactivated successfully. Jan 17 12:18:58.199857 systemd-logind[2054]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:18:58.200153 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:18:58.208942 systemd[1]: Started sshd@2-172.31.29.125:22-139.178.89.65:38294.service - OpenSSH per-connection server daemon (139.178.89.65:38294). Jan 17 12:18:58.210918 systemd-logind[2054]: Removed session 2. Jan 17 12:18:58.376527 sshd[2365]: Accepted publickey for core from 139.178.89.65 port 38294 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:58.378878 sshd[2365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:58.386480 systemd-logind[2054]: New session 3 of user core. Jan 17 12:18:58.391950 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 17 12:18:58.519229 sshd[2365]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:58.524795 systemd[1]: sshd@2-172.31.29.125:22-139.178.89.65:38294.service: Deactivated successfully. Jan 17 12:18:58.533513 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:18:58.534421 systemd-logind[2054]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:18:58.535820 systemd-logind[2054]: Removed session 3. Jan 17 12:18:58.548953 systemd[1]: Started sshd@3-172.31.29.125:22-139.178.89.65:38310.service - OpenSSH per-connection server daemon (139.178.89.65:38310). Jan 17 12:18:58.729581 sshd[2373]: Accepted publickey for core from 139.178.89.65 port 38310 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:58.732614 sshd[2373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:58.745451 systemd-logind[2054]: New session 4 of user core. Jan 17 12:18:58.751926 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:18:58.880739 sshd[2373]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:58.887743 systemd[1]: sshd@3-172.31.29.125:22-139.178.89.65:38310.service: Deactivated successfully. Jan 17 12:18:58.894466 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:18:58.895871 systemd-logind[2054]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:18:58.897145 systemd-logind[2054]: Removed session 4. Jan 17 12:18:58.912973 systemd[1]: Started sshd@4-172.31.29.125:22-139.178.89.65:38312.service - OpenSSH per-connection server daemon (139.178.89.65:38312). Jan 17 12:18:59.077220 sshd[2381]: Accepted publickey for core from 139.178.89.65 port 38312 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:59.078694 sshd[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:59.092291 systemd-logind[2054]: New session 5 of user core. Jan 17 12:18:59.098913 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:18:59.249996 sudo[2385]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:18:59.250499 sudo[2385]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:59.263154 sudo[2385]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:59.286387 sshd[2381]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:59.291181 systemd[1]: sshd@4-172.31.29.125:22-139.178.89.65:38312.service: Deactivated successfully. Jan 17 12:18:59.297343 systemd-logind[2054]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:18:59.298249 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:18:59.299616 systemd-logind[2054]: Removed session 5. Jan 17 12:18:59.315455 systemd[1]: Started sshd@5-172.31.29.125:22-139.178.89.65:38318.service - OpenSSH per-connection server daemon (139.178.89.65:38318). Jan 17 12:18:59.475335 sshd[2390]: Accepted publickey for core from 139.178.89.65 port 38318 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:59.477010 sshd[2390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:59.481846 systemd-logind[2054]: New session 6 of user core. Jan 17 12:18:59.491328 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:18:59.590323 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:18:59.590718 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:59.594922 sudo[2395]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:59.600520 sudo[2394]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:18:59.600989 sudo[2394]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:18:59.615069 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:59.618525 auditctl[2398]: No rules Jan 17 12:18:59.618958 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:18:59.619224 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:59.627792 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:59.669324 augenrules[2417]: No rules Jan 17 12:18:59.671110 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:59.674832 sudo[2394]: pam_unix(sudo:session): session closed for user root Jan 17 12:18:59.698440 sshd[2390]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:59.701932 systemd[1]: sshd@5-172.31.29.125:22-139.178.89.65:38318.service: Deactivated successfully. Jan 17 12:18:59.705555 systemd-logind[2054]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:18:59.707366 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:18:59.709229 systemd-logind[2054]: Removed session 6. Jan 17 12:18:59.727135 systemd[1]: Started sshd@6-172.31.29.125:22-139.178.89.65:38334.service - OpenSSH per-connection server daemon (139.178.89.65:38334). Jan 17 12:18:59.885969 sshd[2426]: Accepted publickey for core from 139.178.89.65 port 38334 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:18:59.887005 sshd[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:59.893109 systemd-logind[2054]: New session 7 of user core. Jan 17 12:18:59.898924 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:18:59.997386 sudo[2430]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:18:59.997806 sudo[2430]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:01.239773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:01.247430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:01.305683 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)... Jan 17 12:19:01.305704 systemd[1]: Reloading... Jan 17 12:19:01.533867 zram_generator::config[2512]: No configuration found. Jan 17 12:19:02.001801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:02.310434 systemd[1]: Reloading finished in 1004 ms. Jan 17 12:19:02.448564 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:19:02.448727 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:19:02.449469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:19:02.469344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:02.832823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:02.844814 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:03.022744 kubelet[2581]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:03.022744 kubelet[2581]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:03.022744 kubelet[2581]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:03.024315 kubelet[2581]: I0117 12:19:03.022834 2581 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:03.596889 kubelet[2581]: I0117 12:19:03.596850 2581 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:19:03.596889 kubelet[2581]: I0117 12:19:03.596884 2581 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:03.597182 kubelet[2581]: I0117 12:19:03.597162 2581 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:19:03.637426 kubelet[2581]: I0117 12:19:03.636762 2581 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:03.661594 kubelet[2581]: I0117 12:19:03.660996 2581 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:19:03.661594 kubelet[2581]: I0117 12:19:03.661443 2581 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:03.661905 kubelet[2581]: I0117 12:19:03.661869 2581 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:03.663119 kubelet[2581]: I0117 12:19:03.663081 2581 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:03.663119 kubelet[2581]: I0117 12:19:03.663113 2581 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:19:03.663277 kubelet[2581]: I0117 12:19:03.663260 2581 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:03.663392 kubelet[2581]: I0117 12:19:03.663380 2581 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:19:03.663450 kubelet[2581]: I0117 12:19:03.663403 2581 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:03.663450 kubelet[2581]: I0117 12:19:03.663439 2581 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:19:03.663521 kubelet[2581]: I0117 12:19:03.663460 2581 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:03.664086 kubelet[2581]: E0117 12:19:03.664062 2581 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:03.664317 kubelet[2581]: E0117 12:19:03.664291 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:03.666030 kubelet[2581]: I0117 12:19:03.665678 2581 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:03.673216 kubelet[2581]: I0117 12:19:03.671102 2581 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:03.673216 kubelet[2581]: W0117 12:19:03.671224 2581 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:19:03.674533 kubelet[2581]: I0117 12:19:03.674494 2581 server.go:1256] "Started kubelet" Jan 17 12:19:03.676859 kubelet[2581]: I0117 12:19:03.676831 2581 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:03.692191 kubelet[2581]: I0117 12:19:03.691939 2581 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:03.703598 kubelet[2581]: I0117 12:19:03.703379 2581 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:03.711049 kubelet[2581]: E0117 12:19:03.706985 2581 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.125.181b7a1c663de206 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.125,UID:172.31.29.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.29.125,},FirstTimestamp:2025-01-17 12:19:03.674462726 +0000 UTC m=+0.816944640,LastTimestamp:2025-01-17 12:19:03.674462726 +0000 UTC m=+0.816944640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.125,}" Jan 17 12:19:03.713092 kubelet[2581]: I0117 12:19:03.713037 2581 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:03.713092 kubelet[2581]: I0117 12:19:03.713560 2581 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:03.716089 kubelet[2581]: I0117 12:19:03.714968 2581 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:19:03.716089 kubelet[2581]: I0117 12:19:03.715891 2581 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:19:03.743163 kubelet[2581]: I0117 12:19:03.734369 2581 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:19:03.756600 kubelet[2581]: I0117 12:19:03.755487 2581 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:03.756600 kubelet[2581]: I0117 12:19:03.755718 2581 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:03.761634 kubelet[2581]: W0117 12:19:03.761598 2581 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:19:03.761800 kubelet[2581]: E0117 12:19:03.761788 2581 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:19:03.763448 kubelet[2581]: I0117 12:19:03.763427 2581 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:03.773917 kubelet[2581]: E0117 12:19:03.773780 2581 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:03.815541 kubelet[2581]: W0117 12:19:03.811317 2581 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:19:03.815541 kubelet[2581]: E0117 12:19:03.811367 2581 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:19:03.815541 kubelet[2581]: E0117 12:19:03.811425 2581 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.29.125\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:19:03.815541 kubelet[2581]: W0117 12:19:03.811550 2581 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.29.125" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:19:03.815541 kubelet[2581]: E0117 12:19:03.811589 2581 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.29.125" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:19:03.836595 kubelet[2581]: I0117 12:19:03.835074 2581 kubelet_node_status.go:73] "Attempting to register node" node="172.31.29.125" Jan 17 12:19:03.848714 kubelet[2581]: E0117 12:19:03.847118 2581 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.125.181b7a1c6c289fbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.125,UID:172.31.29.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.29.125,},FirstTimestamp:2025-01-17 12:19:03.773732797 +0000 UTC m=+0.916214712,LastTimestamp:2025-01-17 12:19:03.773732797 +0000 UTC m=+0.916214712,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.125,}" Jan 17 12:19:03.887201 kubelet[2581]: I0117 12:19:03.886927 2581 kubelet_node_status.go:76] "Successfully registered node" node="172.31.29.125" Jan 17 12:19:03.889924 kubelet[2581]: I0117 12:19:03.889891 2581 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:03.890042 kubelet[2581]: I0117 12:19:03.889979 2581 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:03.890042 kubelet[2581]: I0117 12:19:03.890000 2581 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:03.894752 kubelet[2581]: I0117 12:19:03.894671 2581 policy_none.go:49] "None policy: Start" Jan 17 12:19:03.896449 kubelet[2581]: I0117 12:19:03.896077 2581 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:03.896449 kubelet[2581]: I0117 12:19:03.896107 2581 state_mem.go:35] "Initializing new in-memory state store" Jan 
17 12:19:03.899241 kubelet[2581]: E0117 12:19:03.899206 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:03.903226 kubelet[2581]: I0117 12:19:03.903201 2581 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:03.904661 kubelet[2581]: I0117 12:19:03.903644 2581 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:03.913287 kubelet[2581]: E0117 12:19:03.913247 2581 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.29.125\" not found" Jan 17 12:19:03.955079 kubelet[2581]: I0117 12:19:03.954963 2581 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:03.958152 kubelet[2581]: I0117 12:19:03.957072 2581 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:19:03.958152 kubelet[2581]: I0117 12:19:03.957110 2581 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:03.958152 kubelet[2581]: I0117 12:19:03.957136 2581 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:19:03.958152 kubelet[2581]: E0117 12:19:03.957193 2581 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 12:19:04.000356 kubelet[2581]: E0117 12:19:04.000296 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.100614 kubelet[2581]: E0117 12:19:04.100476 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.201565 kubelet[2581]: E0117 12:19:04.201519 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.302259 kubelet[2581]: E0117 12:19:04.302212 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.403240 kubelet[2581]: E0117 12:19:04.403114 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.503849 kubelet[2581]: E0117 12:19:04.503800 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.599586 kubelet[2581]: I0117 12:19:04.599415 2581 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:19:04.599804 kubelet[2581]: W0117 12:19:04.599764 2581 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:19:04.604655 kubelet[2581]: E0117 12:19:04.604616 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.125\" not found" Jan 17 12:19:04.664601 kubelet[2581]: E0117 12:19:04.664456 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:04.705612 kubelet[2581]: I0117 12:19:04.705562 2581 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:19:04.706226 containerd[2088]: time="2025-01-17T12:19:04.706180738Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:19:04.706817 kubelet[2581]: I0117 12:19:04.706443 2581 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:19:05.370190 sudo[2430]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:05.393185 sshd[2426]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:05.398477 systemd[1]: sshd@6-172.31.29.125:22-139.178.89.65:38334.service: Deactivated successfully. Jan 17 12:19:05.407898 systemd-logind[2054]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:19:05.408611 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:19:05.410922 systemd-logind[2054]: Removed session 7. Jan 17 12:19:05.665155 kubelet[2581]: E0117 12:19:05.665042 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:05.666008 kubelet[2581]: I0117 12:19:05.665046 2581 apiserver.go:52] "Watching apiserver" Jan 17 12:19:05.672174 kubelet[2581]: I0117 12:19:05.672125 2581 topology_manager.go:215] "Topology Admit Handler" podUID="40005e3e-90c4-4c19-96de-3c282050e96d" podNamespace="calico-system" podName="calico-node-v2p78" Jan 17 12:19:05.672515 kubelet[2581]: I0117 12:19:05.672455 2581 topology_manager.go:215] "Topology Admit Handler" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" podNamespace="calico-system" podName="csi-node-driver-r24vf" Jan 17 12:19:05.672585 kubelet[2581]: I0117 12:19:05.672526 2581 topology_manager.go:215] "Topology Admit Handler" podUID="783f6409-de93-4b9c-8e12-2ff586c95a18" podNamespace="kube-system" podName="kube-proxy-6bdjd" Jan 17 12:19:05.678326 kubelet[2581]: E0117 12:19:05.677843 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:05.716594 kubelet[2581]: I0117 12:19:05.716546 2581 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:19:05.735760 kubelet[2581]: I0117 12:19:05.735689 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/783f6409-de93-4b9c-8e12-2ff586c95a18-kube-proxy\") pod \"kube-proxy-6bdjd\" (UID: \"783f6409-de93-4b9c-8e12-2ff586c95a18\") " pod="kube-system/kube-proxy-6bdjd" Jan 17 12:19:05.735760 kubelet[2581]: I0117 12:19:05.735769 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/783f6409-de93-4b9c-8e12-2ff586c95a18-xtables-lock\") pod \"kube-proxy-6bdjd\" (UID: \"783f6409-de93-4b9c-8e12-2ff586c95a18\") " pod="kube-system/kube-proxy-6bdjd" Jan 17 12:19:05.736851 kubelet[2581]: I0117 12:19:05.735871 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl6bx\" (UniqueName: \"kubernetes.io/projected/783f6409-de93-4b9c-8e12-2ff586c95a18-kube-api-access-tl6bx\") pod \"kube-proxy-6bdjd\" (UID: \"783f6409-de93-4b9c-8e12-2ff586c95a18\") " pod="kube-system/kube-proxy-6bdjd" Jan 17 12:19:05.736851 kubelet[2581]: I0117 12:19:05.735952 2581 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-lib-modules\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.736851 kubelet[2581]: I0117 12:19:05.736127 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40005e3e-90c4-4c19-96de-3c282050e96d-tigera-ca-bundle\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.736851 kubelet[2581]: I0117 12:19:05.736426 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/783f6409-de93-4b9c-8e12-2ff586c95a18-lib-modules\") pod \"kube-proxy-6bdjd\" (UID: \"783f6409-de93-4b9c-8e12-2ff586c95a18\") " pod="kube-system/kube-proxy-6bdjd" Jan 17 12:19:05.736851 kubelet[2581]: I0117 12:19:05.736762 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-xtables-lock\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737127 kubelet[2581]: I0117 12:19:05.736822 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-cni-bin-dir\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737127 kubelet[2581]: I0117 12:19:05.736853 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/595a065e-7a20-4c97-aef2-aaf600d8f6c1-kubelet-dir\") pod \"csi-node-driver-r24vf\" (UID: \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\") " pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:05.737127 kubelet[2581]: I0117 12:19:05.736882 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbzjf\" (UniqueName: \"kubernetes.io/projected/595a065e-7a20-4c97-aef2-aaf600d8f6c1-kube-api-access-rbzjf\") pod \"csi-node-driver-r24vf\" (UID: \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\") " pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:05.737127 kubelet[2581]: I0117 12:19:05.736967 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-policysync\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737127 kubelet[2581]: I0117 12:19:05.737002 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-var-run-calico\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737397 kubelet[2581]: I0117 12:19:05.737033 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-var-lib-calico\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737397 kubelet[2581]: I0117 12:19:05.737066 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-cni-net-dir\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737397 kubelet[2581]: I0117 12:19:05.737097 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-flexvol-driver-host\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737397 kubelet[2581]: I0117 12:19:05.737144 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pmr\" (UniqueName: \"kubernetes.io/projected/40005e3e-90c4-4c19-96de-3c282050e96d-kube-api-access-h6pmr\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737397 kubelet[2581]: I0117 12:19:05.737190 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/595a065e-7a20-4c97-aef2-aaf600d8f6c1-varrun\") pod \"csi-node-driver-r24vf\" (UID: \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\") " pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:05.737653 kubelet[2581]: I0117 12:19:05.737626 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/40005e3e-90c4-4c19-96de-3c282050e96d-node-certs\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737717 kubelet[2581]: I0117 12:19:05.737682 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/40005e3e-90c4-4c19-96de-3c282050e96d-cni-log-dir\") pod \"calico-node-v2p78\" (UID: \"40005e3e-90c4-4c19-96de-3c282050e96d\") " pod="calico-system/calico-node-v2p78" Jan 17 12:19:05.737825 kubelet[2581]: I0117 12:19:05.737730 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/595a065e-7a20-4c97-aef2-aaf600d8f6c1-socket-dir\") pod \"csi-node-driver-r24vf\" (UID: \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\") " pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:05.737825 kubelet[2581]: I0117 12:19:05.737768 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/595a065e-7a20-4c97-aef2-aaf600d8f6c1-registration-dir\") pod \"csi-node-driver-r24vf\" (UID: \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\") " pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:05.864444 kubelet[2581]: E0117 12:19:05.862840 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:05.864444 kubelet[2581]: W0117 
12:19:05.863248 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:05.864444 kubelet[2581]: E0117 12:19:05.863304 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:05.888537 kubelet[2581]: E0117 12:19:05.888241 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:05.888537 kubelet[2581]: W0117 12:19:05.888267 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:05.888537 kubelet[2581]: E0117 12:19:05.888296 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:05.888767 kubelet[2581]: E0117 12:19:05.888609 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:05.888767 kubelet[2581]: W0117 12:19:05.888619 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:05.888767 kubelet[2581]: E0117 12:19:05.888643 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:05.891425 kubelet[2581]: E0117 12:19:05.891241 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:05.891425 kubelet[2581]: W0117 12:19:05.891266 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:05.891425 kubelet[2581]: E0117 12:19:05.891292 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:05.977498 containerd[2088]: time="2025-01-17T12:19:05.977360850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bdjd,Uid:783f6409-de93-4b9c-8e12-2ff586c95a18,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:05.981260 containerd[2088]: time="2025-01-17T12:19:05.981131503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2p78,Uid:40005e3e-90c4-4c19-96de-3c282050e96d,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:06.582391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843773386.mount: Deactivated successfully. 
Jan 17 12:19:06.599789 containerd[2088]: time="2025-01-17T12:19:06.599744721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:19:06.600288 containerd[2088]: time="2025-01-17T12:19:06.600248312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:06.601275 containerd[2088]: time="2025-01-17T12:19:06.601240489Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:06.602436 containerd[2088]: time="2025-01-17T12:19:06.602397369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:06.604299 containerd[2088]: time="2025-01-17T12:19:06.603693261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:06.606869 containerd[2088]: time="2025-01-17T12:19:06.605233213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:06.606869 containerd[2088]: time="2025-01-17T12:19:06.606461143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.990218ms" Jan 17 12:19:06.609520 containerd[2088]: time="2025-01-17T12:19:06.609475294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.154084ms" Jan 17 12:19:06.666244 kubelet[2581]: E0117 12:19:06.666214 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:06.912113 containerd[2088]: time="2025-01-17T12:19:06.911902368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:06.913621 containerd[2088]: time="2025-01-17T12:19:06.912982049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:06.913621 containerd[2088]: time="2025-01-17T12:19:06.909644379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:06.913621 containerd[2088]: time="2025-01-17T12:19:06.913295115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:06.913621 containerd[2088]: time="2025-01-17T12:19:06.913351240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:06.913846 containerd[2088]: time="2025-01-17T12:19:06.913612955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:06.913961 containerd[2088]: time="2025-01-17T12:19:06.913831584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:06.914079 containerd[2088]: time="2025-01-17T12:19:06.914027991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:07.178982 containerd[2088]: time="2025-01-17T12:19:07.178761881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bdjd,Uid:783f6409-de93-4b9c-8e12-2ff586c95a18,Namespace:kube-system,Attempt:0,} returns sandbox id \"197207c252b6ecc978b4d0cdf3e7fe460d1feef996463b4cf38e40564b9c6d7b\"" Jan 17 12:19:07.183244 containerd[2088]: time="2025-01-17T12:19:07.183204744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2p78,Uid:40005e3e-90c4-4c19-96de-3c282050e96d,Namespace:calico-system,Attempt:0,} returns sandbox id \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\"" Jan 17 12:19:07.184788 containerd[2088]: time="2025-01-17T12:19:07.184753812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:19:07.666587 kubelet[2581]: E0117 12:19:07.666419 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:07.958280 kubelet[2581]: E0117 12:19:07.957595 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:08.423304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789846831.mount: Deactivated successfully. 
Jan 17 12:19:08.668047 kubelet[2581]: E0117 12:19:08.667974 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:09.119688 containerd[2088]: time="2025-01-17T12:19:09.119635664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:09.120955 containerd[2088]: time="2025-01-17T12:19:09.120799648Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:19:09.123491 containerd[2088]: time="2025-01-17T12:19:09.122433145Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:09.125565 containerd[2088]: time="2025-01-17T12:19:09.125394248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:09.126980 containerd[2088]: time="2025-01-17T12:19:09.126929602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.942133497s" Jan 17 12:19:09.127122 containerd[2088]: time="2025-01-17T12:19:09.127102505Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:19:09.128500 containerd[2088]: time="2025-01-17T12:19:09.128476285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:19:09.129780 containerd[2088]: time="2025-01-17T12:19:09.129745735Z" level=info msg="CreateContainer within sandbox \"197207c252b6ecc978b4d0cdf3e7fe460d1feef996463b4cf38e40564b9c6d7b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:19:09.151710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092251809.mount: Deactivated successfully. 
Jan 17 12:19:09.170595 containerd[2088]: time="2025-01-17T12:19:09.168536641Z" level=info msg="CreateContainer within sandbox \"197207c252b6ecc978b4d0cdf3e7fe460d1feef996463b4cf38e40564b9c6d7b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2cd982192464e0300c2fb54eba778a82a21efb65f1b3e0467e50cd697b263995\"" Jan 17 12:19:09.171666 containerd[2088]: time="2025-01-17T12:19:09.171634173Z" level=info msg="StartContainer for \"2cd982192464e0300c2fb54eba778a82a21efb65f1b3e0467e50cd697b263995\"" Jan 17 12:19:09.278909 containerd[2088]: time="2025-01-17T12:19:09.278859517Z" level=info msg="StartContainer for \"2cd982192464e0300c2fb54eba778a82a21efb65f1b3e0467e50cd697b263995\" returns successfully" Jan 17 12:19:09.669510 kubelet[2581]: E0117 12:19:09.669010 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:09.958361 kubelet[2581]: E0117 12:19:09.957899 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:10.055398 kubelet[2581]: E0117 12:19:10.055351 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.055893 kubelet[2581]: W0117 12:19:10.055559 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.055893 kubelet[2581]: E0117 12:19:10.055646 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.059401 kubelet[2581]: E0117 12:19:10.058307 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.059401 kubelet[2581]: W0117 12:19:10.058328 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.059401 kubelet[2581]: E0117 12:19:10.058356 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.061465 kubelet[2581]: E0117 12:19:10.061385 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.061465 kubelet[2581]: W0117 12:19:10.061446 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.062036 kubelet[2581]: E0117 12:19:10.061475 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:10.064238 kubelet[2581]: E0117 12:19:10.064216 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.064502 kubelet[2581]: W0117 12:19:10.064360 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.064502 kubelet[2581]: E0117 12:19:10.064395 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.065070 kubelet[2581]: E0117 12:19:10.065049 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.065134 kubelet[2581]: W0117 12:19:10.065091 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.065134 kubelet[2581]: E0117 12:19:10.065115 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.065534 kubelet[2581]: E0117 12:19:10.065512 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.065534 kubelet[2581]: W0117 12:19:10.065528 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.065721 kubelet[2581]: E0117 12:19:10.065546 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.065865 kubelet[2581]: E0117 12:19:10.065809 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.065968 kubelet[2581]: W0117 12:19:10.065863 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.065968 kubelet[2581]: E0117 12:19:10.065883 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.066124 kubelet[2581]: E0117 12:19:10.066105 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.066124 kubelet[2581]: W0117 12:19:10.066119 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.066264 kubelet[2581]: E0117 12:19:10.066135 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:10.066383 kubelet[2581]: E0117 12:19:10.066366 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.066383 kubelet[2581]: W0117 12:19:10.066380 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.066383 kubelet[2581]: E0117 12:19:10.066394 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.066755 kubelet[2581]: E0117 12:19:10.066688 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.066755 kubelet[2581]: W0117 12:19:10.066750 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.068193 kubelet[2581]: E0117 12:19:10.066768 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.068193 kubelet[2581]: E0117 12:19:10.067973 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.068193 kubelet[2581]: W0117 12:19:10.068026 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.068193 kubelet[2581]: E0117 12:19:10.068043 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.068363 kubelet[2581]: E0117 12:19:10.068254 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.068363 kubelet[2581]: W0117 12:19:10.068264 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.068363 kubelet[2581]: E0117 12:19:10.068279 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.068492 kubelet[2581]: E0117 12:19:10.068469 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.068492 kubelet[2581]: W0117 12:19:10.068478 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.069280 kubelet[2581]: E0117 12:19:10.068494 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:10.069410 kubelet[2581]: E0117 12:19:10.069290 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.069410 kubelet[2581]: W0117 12:19:10.069301 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.069410 kubelet[2581]: E0117 12:19:10.069318 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.070194 kubelet[2581]: E0117 12:19:10.069514 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.070194 kubelet[2581]: W0117 12:19:10.069523 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.070194 kubelet[2581]: E0117 12:19:10.069540 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.070194 kubelet[2581]: E0117 12:19:10.069875 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.070194 kubelet[2581]: W0117 12:19:10.069888 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.070194 kubelet[2581]: E0117 12:19:10.069903 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.070194 kubelet[2581]: E0117 12:19:10.070170 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.070194 kubelet[2581]: W0117 12:19:10.070180 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.070194 kubelet[2581]: E0117 12:19:10.070196 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.071169 kubelet[2581]: E0117 12:19:10.070394 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.071169 kubelet[2581]: W0117 12:19:10.070404 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.071169 kubelet[2581]: E0117 12:19:10.070417 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:10.071169 kubelet[2581]: E0117 12:19:10.070691 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.071169 kubelet[2581]: W0117 12:19:10.070702 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.071169 kubelet[2581]: E0117 12:19:10.070717 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.071169 kubelet[2581]: E0117 12:19:10.071035 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.071169 kubelet[2581]: W0117 12:19:10.071045 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.071169 kubelet[2581]: E0117 12:19:10.071062 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.077615 kubelet[2581]: E0117 12:19:10.077540 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.077615 kubelet[2581]: W0117 12:19:10.077602 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.077615 kubelet[2581]: E0117 12:19:10.077627 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.078493 kubelet[2581]: E0117 12:19:10.078254 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.078493 kubelet[2581]: W0117 12:19:10.078269 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.078493 kubelet[2581]: E0117 12:19:10.078301 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.078736 kubelet[2581]: E0117 12:19:10.078650 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.078736 kubelet[2581]: W0117 12:19:10.078662 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.078736 kubelet[2581]: E0117 12:19:10.078693 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:10.079010 kubelet[2581]: E0117 12:19:10.078998 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.079010 kubelet[2581]: W0117 12:19:10.079011 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.079105 kubelet[2581]: E0117 12:19:10.079041 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.079326 kubelet[2581]: E0117 12:19:10.079302 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.079326 kubelet[2581]: W0117 12:19:10.079318 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.079443 kubelet[2581]: E0117 12:19:10.079340 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.079653 kubelet[2581]: E0117 12:19:10.079634 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.079653 kubelet[2581]: W0117 12:19:10.079648 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.079759 kubelet[2581]: E0117 12:19:10.079749 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.080243 kubelet[2581]: E0117 12:19:10.080224 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.080243 kubelet[2581]: W0117 12:19:10.080239 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.080378 kubelet[2581]: E0117 12:19:10.080261 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.080506 kubelet[2581]: E0117 12:19:10.080489 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.080506 kubelet[2581]: W0117 12:19:10.080503 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.080677 kubelet[2581]: E0117 12:19:10.080524 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:10.080892 kubelet[2581]: E0117 12:19:10.080797 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.080892 kubelet[2581]: W0117 12:19:10.080809 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.080892 kubelet[2581]: E0117 12:19:10.080837 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.081278 kubelet[2581]: E0117 12:19:10.081265 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.081278 kubelet[2581]: W0117 12:19:10.081275 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.081356 kubelet[2581]: E0117 12:19:10.081308 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.081849 kubelet[2581]: E0117 12:19:10.081831 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.082024 kubelet[2581]: W0117 12:19:10.081851 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.082024 kubelet[2581]: E0117 12:19:10.081874 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.082126 kubelet[2581]: E0117 12:19:10.082098 2581 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:10.082126 kubelet[2581]: W0117 12:19:10.082108 2581 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:10.082126 kubelet[2581]: E0117 12:19:10.082123 2581 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:10.408287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004125796.mount: Deactivated successfully. 
Jan 17 12:19:10.563011 containerd[2088]: time="2025-01-17T12:19:10.562957858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:10.564217 containerd[2088]: time="2025-01-17T12:19:10.564090053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 17 12:19:10.565628 containerd[2088]: time="2025-01-17T12:19:10.565381741Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:10.568288 containerd[2088]: time="2025-01-17T12:19:10.568249495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:10.568934 containerd[2088]: time="2025-01-17T12:19:10.568893706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.439505916s" Jan 17 12:19:10.569013 containerd[2088]: time="2025-01-17T12:19:10.568941270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:19:10.571099 containerd[2088]: time="2025-01-17T12:19:10.571060682Z" level=info msg="CreateContainer within sandbox \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:19:10.590748 containerd[2088]: time="2025-01-17T12:19:10.590698717Z" level=info msg="CreateContainer within sandbox \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"350fbce28a8e2f199b9bc8da121dbd7bcb504ea99f8ef5821a942a1b53dd5c23\"" Jan 17 12:19:10.591462 containerd[2088]: time="2025-01-17T12:19:10.591427537Z" level=info msg="StartContainer for \"350fbce28a8e2f199b9bc8da121dbd7bcb504ea99f8ef5821a942a1b53dd5c23\"" Jan 17 12:19:10.669659 containerd[2088]: time="2025-01-17T12:19:10.669431567Z" level=info msg="StartContainer for \"350fbce28a8e2f199b9bc8da121dbd7bcb504ea99f8ef5821a942a1b53dd5c23\" returns successfully" Jan 17 12:19:10.670268 kubelet[2581]: E0117 12:19:10.669508 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:10.892907 containerd[2088]: time="2025-01-17T12:19:10.892734319Z" level=info msg="shim disconnected" id=350fbce28a8e2f199b9bc8da121dbd7bcb504ea99f8ef5821a942a1b53dd5c23 namespace=k8s.io Jan 17 12:19:10.892907 containerd[2088]: time="2025-01-17T12:19:10.892886837Z" level=warning msg="cleaning up after shim disconnected" id=350fbce28a8e2f199b9bc8da121dbd7bcb504ea99f8ef5821a942a1b53dd5c23 namespace=k8s.io Jan 17 12:19:10.892907 containerd[2088]: time="2025-01-17T12:19:10.892901964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:10.989135 containerd[2088]: time="2025-01-17T12:19:10.988901187Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:19:11.026302 kubelet[2581]: I0117 12:19:11.025448 2581 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6bdjd" podStartSLOduration=6.07904087 podStartE2EDuration="8.025393247s" podCreationTimestamp="2025-01-17 12:19:03 +0000 UTC" firstStartedPulling="2025-01-17 12:19:07.1812908 +0000 UTC m=+4.323772695" lastFinishedPulling="2025-01-17 12:19:09.127643177 +0000 UTC m=+6.270125072" observedRunningTime="2025-01-17 12:19:10.013276427 +0000 UTC m=+7.155758337" watchObservedRunningTime="2025-01-17 12:19:11.025393247 +0000 UTC m=+8.167875539" Jan 17 12:19:11.582188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-350fbce28a8e2f199b9bc8da121dbd7bcb504ea99f8ef5821a942a1b53dd5c23-rootfs.mount: Deactivated successfully. Jan 17 12:19:11.670149 kubelet[2581]: E0117 12:19:11.670096 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:11.958549 kubelet[2581]: E0117 12:19:11.957673 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:12.670630 kubelet[2581]: E0117 12:19:12.670585 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:13.671628 kubelet[2581]: E0117 12:19:13.671529 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:13.958196 kubelet[2581]: E0117 12:19:13.957713 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:14.672455 kubelet[2581]: E0117 12:19:14.672316 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:15.005909 containerd[2088]: time="2025-01-17T12:19:15.005765692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.007464 containerd[2088]: time="2025-01-17T12:19:15.007402863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:19:15.008676 containerd[2088]: time="2025-01-17T12:19:15.008631701Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.011455 containerd[2088]: time="2025-01-17T12:19:15.011414066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.012282 containerd[2088]: time="2025-01-17T12:19:15.012249366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.023313227s" Jan 17 12:19:15.012282 containerd[2088]: time="2025-01-17T12:19:15.012285293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:19:15.014325 containerd[2088]: time="2025-01-17T12:19:15.014292342Z" level=info msg="CreateContainer within sandbox \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:19:15.030088 containerd[2088]: time="2025-01-17T12:19:15.030044057Z" level=info msg="CreateContainer within sandbox \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ed616cf1227af849054bc16ace068832dd3c8198fcc902c4ea4bf97b3d9e020d\"" Jan 17 12:19:15.032508 containerd[2088]: time="2025-01-17T12:19:15.031029260Z" level=info msg="StartContainer for \"ed616cf1227af849054bc16ace068832dd3c8198fcc902c4ea4bf97b3d9e020d\"" Jan 17 12:19:15.100658 containerd[2088]: time="2025-01-17T12:19:15.100368977Z" level=info msg="StartContainer for \"ed616cf1227af849054bc16ace068832dd3c8198fcc902c4ea4bf97b3d9e020d\" returns successfully" Jan 17 12:19:15.673202 kubelet[2581]: E0117 12:19:15.673161 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:15.844168 containerd[2088]: time="2025-01-17T12:19:15.843789303Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:19:15.881460 kubelet[2581]: I0117 12:19:15.880095 2581 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:19:15.895989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed616cf1227af849054bc16ace068832dd3c8198fcc902c4ea4bf97b3d9e020d-rootfs.mount: Deactivated successfully. 
Jan 17 12:19:15.989255 containerd[2088]: time="2025-01-17T12:19:15.988202710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r24vf,Uid:595a065e-7a20-4c97-aef2-aaf600d8f6c1,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:16.402463 containerd[2088]: time="2025-01-17T12:19:16.402304788Z" level=info msg="shim disconnected" id=ed616cf1227af849054bc16ace068832dd3c8198fcc902c4ea4bf97b3d9e020d namespace=k8s.io Jan 17 12:19:16.402463 containerd[2088]: time="2025-01-17T12:19:16.402369277Z" level=warning msg="cleaning up after shim disconnected" id=ed616cf1227af849054bc16ace068832dd3c8198fcc902c4ea4bf97b3d9e020d namespace=k8s.io Jan 17 12:19:16.402463 containerd[2088]: time="2025-01-17T12:19:16.402383217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:16.430165 containerd[2088]: time="2025-01-17T12:19:16.430090992Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:19:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:19:16.463344 containerd[2088]: time="2025-01-17T12:19:16.463269346Z" level=error msg="Failed to destroy network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:16.467781 containerd[2088]: time="2025-01-17T12:19:16.466661209Z" level=error msg="encountered an error cleaning up failed sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:16.467781 containerd[2088]: time="2025-01-17T12:19:16.466753994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r24vf,Uid:595a065e-7a20-4c97-aef2-aaf600d8f6c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:16.468520 kubelet[2581]: E0117 12:19:16.468061 2581 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:16.468520 kubelet[2581]: E0117 12:19:16.468369 2581 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:16.468520 kubelet[2581]: E0117 12:19:16.468465 2581 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r24vf" Jan 17 12:19:16.469582 kubelet[2581]: E0117 12:19:16.468548 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r24vf_calico-system(595a065e-7a20-4c97-aef2-aaf600d8f6c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r24vf_calico-system(595a065e-7a20-4c97-aef2-aaf600d8f6c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:16.472021 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51-shm.mount: Deactivated successfully. Jan 17 12:19:16.674071 kubelet[2581]: E0117 12:19:16.673935 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:17.014060 containerd[2088]: time="2025-01-17T12:19:17.012804969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:19:17.014764 kubelet[2581]: I0117 12:19:17.014732 2581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:19:17.015520 containerd[2088]: time="2025-01-17T12:19:17.015481663Z" level=info msg="StopPodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\"" Jan 17 12:19:17.016782 containerd[2088]: time="2025-01-17T12:19:17.015795629Z" level=info msg="Ensure that sandbox fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51 in task-service has been cleanup successfully" Jan 17 12:19:17.055813 containerd[2088]: time="2025-01-17T12:19:17.055763725Z" level=error msg="StopPodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" failed" error="failed to destroy network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:17.056097 kubelet[2581]: E0117 12:19:17.056067 2581 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:19:17.056198 kubelet[2581]: E0117 12:19:17.056156 2581 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51"} Jan 
17 12:19:17.056241 kubelet[2581]: E0117 12:19:17.056209 2581 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:19:17.056322 kubelet[2581]: E0117 12:19:17.056252 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"595a065e-7a20-4c97-aef2-aaf600d8f6c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r24vf" podUID="595a065e-7a20-4c97-aef2-aaf600d8f6c1" Jan 17 12:19:17.674719 kubelet[2581]: E0117 12:19:17.674666 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:18.675663 kubelet[2581]: E0117 12:19:18.675492 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:19.269376 kubelet[2581]: I0117 12:19:19.269305 2581 topology_manager.go:215] "Topology Admit Handler" podUID="b2ada490-3902-4cde-9e69-9ef4430ae355" podNamespace="default" podName="nginx-deployment-6d5f899847-7xb6d" Jan 17 12:19:19.346802 kubelet[2581]: I0117 12:19:19.346767 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzpw\" (UniqueName: \"kubernetes.io/projected/b2ada490-3902-4cde-9e69-9ef4430ae355-kube-api-access-qfzpw\") pod \"nginx-deployment-6d5f899847-7xb6d\" (UID: \"b2ada490-3902-4cde-9e69-9ef4430ae355\") " pod="default/nginx-deployment-6d5f899847-7xb6d" Jan 17 12:19:19.575539 containerd[2088]: time="2025-01-17T12:19:19.575422376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7xb6d,Uid:b2ada490-3902-4cde-9e69-9ef4430ae355,Namespace:default,Attempt:0,}" Jan 17 12:19:19.675822 kubelet[2581]: E0117 12:19:19.675759 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:19.743849 containerd[2088]: time="2025-01-17T12:19:19.743798546Z" level=error msg="Failed to destroy network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:19.746590 containerd[2088]: time="2025-01-17T12:19:19.744874241Z" level=error msg="encountered an error cleaning up failed sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:19.746590 containerd[2088]: 
time="2025-01-17T12:19:19.744940346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7xb6d,Uid:b2ada490-3902-4cde-9e69-9ef4430ae355,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:19.746909 kubelet[2581]: E0117 12:19:19.745228 2581 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:19.746909 kubelet[2581]: E0117 12:19:19.745288 2581 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7xb6d" Jan 17 12:19:19.746909 kubelet[2581]: E0117 12:19:19.745322 2581 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-7xb6d" Jan 17 12:19:19.748599 kubelet[2581]: E0117 12:19:19.747212 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-7xb6d_default(b2ada490-3902-4cde-9e69-9ef4430ae355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-7xb6d_default(b2ada490-3902-4cde-9e69-9ef4430ae355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7xb6d" podUID="b2ada490-3902-4cde-9e69-9ef4430ae355" Jan 17 12:19:19.752054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33-shm.mount: Deactivated successfully. 
Jan 17 12:19:20.027679 kubelet[2581]: I0117 12:19:20.027251 2581 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:19:20.028792 containerd[2088]: time="2025-01-17T12:19:20.028630682Z" level=info msg="StopPodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\"" Jan 17 12:19:20.029889 containerd[2088]: time="2025-01-17T12:19:20.029108133Z" level=info msg="Ensure that sandbox 3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33 in task-service has been cleanup successfully" Jan 17 12:19:20.101786 containerd[2088]: time="2025-01-17T12:19:20.101731496Z" level=error msg="StopPodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" failed" error="failed to destroy network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:20.102689 kubelet[2581]: E0117 12:19:20.102653 2581 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:19:20.103329 kubelet[2581]: E0117 12:19:20.102901 2581 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33"} Jan 17 12:19:20.103329 kubelet[2581]: E0117 12:19:20.102982 2581 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2ada490-3902-4cde-9e69-9ef4430ae355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:19:20.103329 kubelet[2581]: E0117 12:19:20.103292 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2ada490-3902-4cde-9e69-9ef4430ae355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-7xb6d" podUID="b2ada490-3902-4cde-9e69-9ef4430ae355" Jan 17 12:19:20.322526 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 12:19:20.676477 kubelet[2581]: E0117 12:19:20.676173 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:21.676382 kubelet[2581]: E0117 12:19:21.676345 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:22.677473 kubelet[2581]: E0117 12:19:22.677438 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:23.663977 kubelet[2581]: E0117 12:19:23.663937 2581 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:23.679531 kubelet[2581]: E0117 12:19:23.679312 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:24.363414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633379062.mount: Deactivated successfully. Jan 17 12:19:24.424502 containerd[2088]: time="2025-01-17T12:19:24.424441060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:24.425751 containerd[2088]: time="2025-01-17T12:19:24.425593437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:19:24.427600 containerd[2088]: time="2025-01-17T12:19:24.426625695Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:24.428961 containerd[2088]: time="2025-01-17T12:19:24.428909194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:24.429676 containerd[2088]: time="2025-01-17T12:19:24.429473305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.416569837s" Jan 17 12:19:24.429676 containerd[2088]: time="2025-01-17T12:19:24.429517091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:19:24.481502 containerd[2088]: time="2025-01-17T12:19:24.481460706Z" level=info msg="CreateContainer within sandbox \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:19:24.522059 containerd[2088]: time="2025-01-17T12:19:24.522009494Z" level=info msg="CreateContainer within sandbox \"afa9b03e87b7455eb4b27cde078c3fdc5e92513d05b11eaf3aa41692105b38b3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"104850d0e0cff1175167151e595eb35f071bd25f26e3c3c9cb21789c8d771132\"" Jan 17 12:19:24.522640 containerd[2088]: time="2025-01-17T12:19:24.522591053Z" level=info msg="StartContainer for \"104850d0e0cff1175167151e595eb35f071bd25f26e3c3c9cb21789c8d771132\"" Jan 17 12:19:24.681288 kubelet[2581]: E0117 12:19:24.680065 2581 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:24.683731 containerd[2088]: time="2025-01-17T12:19:24.681636021Z" level=info msg="StartContainer for \"104850d0e0cff1175167151e595eb35f071bd25f26e3c3c9cb21789c8d771132\" returns successfully" Jan 17 12:19:24.829737 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:19:24.829888 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:19:25.167303 kubelet[2581]: I0117 12:19:25.166248 2581 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-v2p78" podStartSLOduration=4.921444406 podStartE2EDuration="22.166190671s" podCreationTimestamp="2025-01-17 12:19:03 +0000 UTC" firstStartedPulling="2025-01-17 12:19:07.185161057 +0000 UTC m=+4.327642956" lastFinishedPulling="2025-01-17 12:19:24.429907325 +0000 UTC m=+21.572389221" observedRunningTime="2025-01-17 12:19:25.165125104 +0000 UTC m=+22.307607027" watchObservedRunningTime="2025-01-17 12:19:25.166190671 +0000 UTC m=+22.308672580" Jan 17 12:19:25.681184 kubelet[2581]: E0117 12:19:25.681127 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:26.682784 kubelet[2581]: E0117 12:19:26.682718 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:26.878611 kernel: bpftool[3402]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:19:27.198743 systemd-networkd[1647]: vxlan.calico: Link UP Jan 17 12:19:27.198752 systemd-networkd[1647]: vxlan.calico: Gained carrier Jan 17 12:19:27.204225 (udev-worker)[3172]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:19:27.251207 (udev-worker)[3431]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:19:27.683190 kubelet[2581]: E0117 12:19:27.683074 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:28.307488 systemd-networkd[1647]: vxlan.calico: Gained IPv6LL Jan 17 12:19:28.683602 kubelet[2581]: E0117 12:19:28.683481 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:29.684679 kubelet[2581]: E0117 12:19:29.684619 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:30.332617 ntpd[2039]: Listen normally on 6 vxlan.calico 192.168.108.64:123 Jan 17 12:19:30.332733 ntpd[2039]: Listen normally on 7 vxlan.calico [fe80::6406:ebff:feb2:981%3]:123 Jan 17 12:19:30.333184 ntpd[2039]: 17 Jan 12:19:30 ntpd[2039]: Listen normally on 6 vxlan.calico 192.168.108.64:123 Jan 17 12:19:30.333184 ntpd[2039]: 17 Jan 12:19:30 ntpd[2039]: Listen normally on 7 vxlan.calico [fe80::6406:ebff:feb2:981%3]:123 Jan 17 12:19:30.685659 kubelet[2581]: E0117 12:19:30.685505 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:30.958736 containerd[2088]: time="2025-01-17T12:19:30.958470997Z" level=info msg="StopPodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\"" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.127 [INFO][3488] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.128 [INFO][3488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" iface="eth0" netns="/var/run/netns/cni-c7217e10-730a-6ee2-c18f-8452c484e3db" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.128 [INFO][3488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" iface="eth0" netns="/var/run/netns/cni-c7217e10-730a-6ee2-c18f-8452c484e3db" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.131 [INFO][3488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" iface="eth0" netns="/var/run/netns/cni-c7217e10-730a-6ee2-c18f-8452c484e3db" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.131 [INFO][3488] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.131 [INFO][3488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.268 [INFO][3494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.268 [INFO][3494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.268 [INFO][3494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.283 [WARNING][3494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.283 [INFO][3494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.287 [INFO][3494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:31.301891 containerd[2088]: 2025-01-17 12:19:31.296 [INFO][3488] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:19:31.302712 containerd[2088]: time="2025-01-17T12:19:31.301996877Z" level=info msg="TearDown network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" successfully" Jan 17 12:19:31.302712 containerd[2088]: time="2025-01-17T12:19:31.302030110Z" level=info msg="StopPodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" returns successfully" Jan 17 12:19:31.307605 containerd[2088]: time="2025-01-17T12:19:31.307255826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7xb6d,Uid:b2ada490-3902-4cde-9e69-9ef4430ae355,Namespace:default,Attempt:1,}" Jan 17 12:19:31.314090 systemd[1]: run-netns-cni\x2dc7217e10\x2d730a\x2d6ee2\x2dc18f\x2d8452c484e3db.mount: Deactivated successfully. Jan 17 12:19:31.600748 systemd-networkd[1647]: cali23c81d6a3b1: Link UP Jan 17 12:19:31.602680 systemd-networkd[1647]: cali23c81d6a3b1: Gained carrier Jan 17 12:19:31.603750 (udev-worker)[3518]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.449 [INFO][3500] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0 nginx-deployment-6d5f899847- default b2ada490-3902-4cde-9e69-9ef4430ae355 1021 0 2025-01-17 12:19:19 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.29.125 nginx-deployment-6d5f899847-7xb6d eth0 default [] [] [kns.default ksa.default.default] cali23c81d6a3b1 [] []}} ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.450 [INFO][3500] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.500 [INFO][3511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" HandleID="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.524 [INFO][3511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" HandleID="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291760), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.125", "pod":"nginx-deployment-6d5f899847-7xb6d", "timestamp":"2025-01-17 12:19:31.50061498 +0000 UTC"}, Hostname:"172.31.29.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.524 [INFO][3511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.524 [INFO][3511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.524 [INFO][3511] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.125' Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.528 [INFO][3511] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.538 [INFO][3511] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.553 [INFO][3511] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.559 [INFO][3511] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.564 [INFO][3511] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.564 [INFO][3511] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.567 [INFO][3511] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634 Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.583 [INFO][3511] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.592 [INFO][3511] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.65/26] block=192.168.108.64/26 handle="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.592 [INFO][3511] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.65/26] handle="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" host="172.31.29.125" Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.592 [INFO][3511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:31.634338 containerd[2088]: 2025-01-17 12:19:31.592 [INFO][3511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.65/26] IPv6=[] ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" HandleID="k8s-pod-network.6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.636089 containerd[2088]: 2025-01-17 12:19:31.594 [INFO][3500] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"b2ada490-3902-4cde-9e69-9ef4430ae355", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"", Pod:"nginx-deployment-6d5f899847-7xb6d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali23c81d6a3b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:31.636089 containerd[2088]: 2025-01-17 12:19:31.594 [INFO][3500] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.65/32] ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.636089 containerd[2088]: 2025-01-17 12:19:31.594 [INFO][3500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23c81d6a3b1 ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.636089 containerd[2088]: 2025-01-17 12:19:31.604 [INFO][3500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.636089 containerd[2088]: 2025-01-17 12:19:31.609 [INFO][3500] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" 
WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"b2ada490-3902-4cde-9e69-9ef4430ae355", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634", Pod:"nginx-deployment-6d5f899847-7xb6d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali23c81d6a3b1", MAC:"de:ce:0f:8a:5e:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:31.636089 containerd[2088]: 2025-01-17 12:19:31.631 [INFO][3500] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634" Namespace="default" Pod="nginx-deployment-6d5f899847-7xb6d" WorkloadEndpoint="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:19:31.679304 containerd[2088]: time="2025-01-17T12:19:31.679026340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:31.679304 containerd[2088]: time="2025-01-17T12:19:31.679103455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:31.679304 containerd[2088]: time="2025-01-17T12:19:31.679124064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:31.680316 containerd[2088]: time="2025-01-17T12:19:31.679995510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:31.688161 kubelet[2581]: E0117 12:19:31.686380 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:31.782483 containerd[2088]: time="2025-01-17T12:19:31.782438455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7xb6d,Uid:b2ada490-3902-4cde-9e69-9ef4430ae355,Namespace:default,Attempt:1,} returns sandbox id \"6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634\"" Jan 17 12:19:31.784974 containerd[2088]: time="2025-01-17T12:19:31.784902217Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:19:31.959796 containerd[2088]: time="2025-01-17T12:19:31.959744769Z" level=info msg="StopPodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\"" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.045 [INFO][3586] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.045 [INFO][3586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" iface="eth0" netns="/var/run/netns/cni-d235bbf6-4dec-73fc-eece-33e015920708" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.045 [INFO][3586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" iface="eth0" netns="/var/run/netns/cni-d235bbf6-4dec-73fc-eece-33e015920708" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.045 [INFO][3586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" iface="eth0" netns="/var/run/netns/cni-d235bbf6-4dec-73fc-eece-33e015920708" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.045 [INFO][3586] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.045 [INFO][3586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.077 [INFO][3592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.077 [INFO][3592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.077 [INFO][3592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.086 [WARNING][3592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.086 [INFO][3592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.089 [INFO][3592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:32.094344 containerd[2088]: 2025-01-17 12:19:32.092 [INFO][3586] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:19:32.096188 containerd[2088]: time="2025-01-17T12:19:32.096120605Z" level=info msg="TearDown network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" successfully" Jan 17 12:19:32.096188 containerd[2088]: time="2025-01-17T12:19:32.096161067Z" level=info msg="StopPodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" returns successfully" Jan 17 12:19:32.099059 containerd[2088]: time="2025-01-17T12:19:32.098298734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r24vf,Uid:595a065e-7a20-4c97-aef2-aaf600d8f6c1,Namespace:calico-system,Attempt:1,}" Jan 17 12:19:32.100533 systemd[1]: run-netns-cni\x2dd235bbf6\x2d4dec\x2d73fc\x2deece\x2d33e015920708.mount: Deactivated successfully. Jan 17 12:19:32.319244 (udev-worker)[3520]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:19:32.319797 systemd-networkd[1647]: caliaff0e398840: Link UP Jan 17 12:19:32.321719 systemd-networkd[1647]: caliaff0e398840: Gained carrier Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.178 [INFO][3598] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.125-k8s-csi--node--driver--r24vf-eth0 csi-node-driver- calico-system 595a065e-7a20-4c97-aef2-aaf600d8f6c1 1028 0 2025-01-17 12:19:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.29.125 csi-node-driver-r24vf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaff0e398840 [] []}} ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.178 [INFO][3598] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.225 [INFO][3609] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" HandleID="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.259 [INFO][3609] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" HandleID="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b30), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.29.125", "pod":"csi-node-driver-r24vf", "timestamp":"2025-01-17 12:19:32.225055391 +0000 UTC"}, Hostname:"172.31.29.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.259 [INFO][3609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.259 [INFO][3609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.259 [INFO][3609] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.125' Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.263 [INFO][3609] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.273 [INFO][3609] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.290 [INFO][3609] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.294 [INFO][3609] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.297 [INFO][3609] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.297 [INFO][3609] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.300 [INFO][3609] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328 Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.307 [INFO][3609] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.314 [INFO][3609] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.66/26] block=192.168.108.64/26 handle="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.314 [INFO][3609] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.66/26] handle="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" host="172.31.29.125" Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.314 [INFO][3609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:32.339724 containerd[2088]: 2025-01-17 12:19:32.314 [INFO][3609] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.66/26] IPv6=[] ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" HandleID="k8s-pod-network.88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.341528 containerd[2088]: 2025-01-17 12:19:32.316 [INFO][3598] cni-plugin/k8s.go 386: Populated endpoint ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-csi--node--driver--r24vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"595a065e-7a20-4c97-aef2-aaf600d8f6c1", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"", Pod:"csi-node-driver-r24vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaff0e398840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:32.341528 containerd[2088]: 2025-01-17 12:19:32.316 [INFO][3598] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.66/32] ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.341528 containerd[2088]: 2025-01-17 12:19:32.316 [INFO][3598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaff0e398840 ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.341528 containerd[2088]: 2025-01-17 12:19:32.322 [INFO][3598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.341528 containerd[2088]: 2025-01-17 12:19:32.323 [INFO][3598] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" 
WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-csi--node--driver--r24vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"595a065e-7a20-4c97-aef2-aaf600d8f6c1", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328", Pod:"csi-node-driver-r24vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaff0e398840", MAC:"ca:81:9f:3e:3d:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:32.341528 containerd[2088]: 2025-01-17 12:19:32.337 [INFO][3598] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328" Namespace="calico-system" Pod="csi-node-driver-r24vf" WorkloadEndpoint="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:19:32.373137 containerd[2088]: time="2025-01-17T12:19:32.372610534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:32.373137 containerd[2088]: time="2025-01-17T12:19:32.372748525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:32.373137 containerd[2088]: time="2025-01-17T12:19:32.372766404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.373137 containerd[2088]: time="2025-01-17T12:19:32.372867243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.402188 systemd[1]: run-containerd-runc-k8s.io-88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328-runc.vFq1xo.mount: Deactivated successfully. 
Jan 17 12:19:32.432020 containerd[2088]: time="2025-01-17T12:19:32.431988122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r24vf,Uid:595a065e-7a20-4c97-aef2-aaf600d8f6c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328\"" Jan 17 12:19:32.687346 kubelet[2581]: E0117 12:19:32.687297 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:33.364311 systemd-networkd[1647]: cali23c81d6a3b1: Gained IPv6LL Jan 17 12:19:33.688313 kubelet[2581]: E0117 12:19:33.688257 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:33.939371 systemd-networkd[1647]: caliaff0e398840: Gained IPv6LL Jan 17 12:19:34.329083 update_engine[2060]: I20250117 12:19:34.328874 2060 update_attempter.cc:509] Updating boot flags... Jan 17 12:19:34.410664 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3693) Jan 17 12:19:34.689100 kubelet[2581]: E0117 12:19:34.689022 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:35.063269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748907916.mount: Deactivated successfully. Jan 17 12:19:35.690253 kubelet[2581]: E0117 12:19:35.690215 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:36.332497 ntpd[2039]: Listen normally on 8 cali23c81d6a3b1 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 12:19:36.334186 ntpd[2039]: 17 Jan 12:19:36 ntpd[2039]: Listen normally on 8 cali23c81d6a3b1 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 12:19:36.334186 ntpd[2039]: 17 Jan 12:19:36 ntpd[2039]: Listen normally on 9 caliaff0e398840 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:19:36.332659 ntpd[2039]: Listen normally on 9 caliaff0e398840 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:19:36.669800 containerd[2088]: time="2025-01-17T12:19:36.669745980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:36.671289 containerd[2088]: time="2025-01-17T12:19:36.671236877Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:19:36.673404 containerd[2088]: time="2025-01-17T12:19:36.672232792Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:36.681311 containerd[2088]: time="2025-01-17T12:19:36.681256667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:36.683598 containerd[2088]: time="2025-01-17T12:19:36.683525407Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.898404003s" Jan 17 12:19:36.683598 containerd[2088]: time="2025-01-17T12:19:36.683588851Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" 
returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:19:36.685015 containerd[2088]: time="2025-01-17T12:19:36.684967260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:19:36.690365 kubelet[2581]: E0117 12:19:36.690313 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:36.710852 containerd[2088]: time="2025-01-17T12:19:36.710809791Z" level=info msg="CreateContainer within sandbox \"6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:19:36.730893 containerd[2088]: time="2025-01-17T12:19:36.730840571Z" level=info msg="CreateContainer within sandbox \"6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"79b3e6d7ac98a87ef43047aededaa807d3c83ddc48da16929b3ee2d908f40dbb\"" Jan 17 12:19:36.731874 containerd[2088]: time="2025-01-17T12:19:36.731718482Z" level=info msg="StartContainer for \"79b3e6d7ac98a87ef43047aededaa807d3c83ddc48da16929b3ee2d908f40dbb\"" Jan 17 12:19:36.816100 containerd[2088]: time="2025-01-17T12:19:36.815835668Z" level=info msg="StartContainer for \"79b3e6d7ac98a87ef43047aededaa807d3c83ddc48da16929b3ee2d908f40dbb\" returns successfully" Jan 17 12:19:37.691125 kubelet[2581]: E0117 12:19:37.691067 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:38.660435 containerd[2088]: time="2025-01-17T12:19:38.660386068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.661650 containerd[2088]: time="2025-01-17T12:19:38.661598748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:19:38.663828 containerd[2088]: time="2025-01-17T12:19:38.662615523Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.665700 containerd[2088]: time="2025-01-17T12:19:38.665474325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:38.666353 containerd[2088]: time="2025-01-17T12:19:38.666316991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.98131592s" Jan 17 12:19:38.666440 containerd[2088]: time="2025-01-17T12:19:38.666352627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:19:38.668758 containerd[2088]: time="2025-01-17T12:19:38.668723676Z" level=info msg="CreateContainer within sandbox \"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:19:38.691797 kubelet[2581]: E0117 12:19:38.691737 2581 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:38.730330 containerd[2088]: time="2025-01-17T12:19:38.730287834Z" level=info msg="CreateContainer within sandbox \"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c3024aa11c66667b2b4467a3f0939560fad7aef57b2989bc78292f8028461489\"" Jan 17 12:19:38.731405 containerd[2088]: time="2025-01-17T12:19:38.731374877Z" level=info msg="StartContainer for \"c3024aa11c66667b2b4467a3f0939560fad7aef57b2989bc78292f8028461489\"" Jan 17 12:19:38.831178 containerd[2088]: time="2025-01-17T12:19:38.831079844Z" level=info msg="StartContainer for \"c3024aa11c66667b2b4467a3f0939560fad7aef57b2989bc78292f8028461489\" returns successfully" Jan 17 12:19:38.833054 containerd[2088]: time="2025-01-17T12:19:38.832855716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:19:39.692243 kubelet[2581]: E0117 12:19:39.692195 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:40.251925 containerd[2088]: time="2025-01-17T12:19:40.251830183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:40.253412 containerd[2088]: time="2025-01-17T12:19:40.253279790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:19:40.254634 containerd[2088]: time="2025-01-17T12:19:40.254600338Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:40.257158 containerd[2088]: time="2025-01-17T12:19:40.257099419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:40.257954 containerd[2088]: time="2025-01-17T12:19:40.257834284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.424941373s" Jan 17 12:19:40.257954 containerd[2088]: time="2025-01-17T12:19:40.257889921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:19:40.260836 containerd[2088]: time="2025-01-17T12:19:40.260799925Z" level=info msg="CreateContainer within sandbox \"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:19:40.299764 containerd[2088]: time="2025-01-17T12:19:40.299334481Z" level=info msg="CreateContainer within sandbox \"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"39cc04a4844a606a8e3774aa8ce1a6f60e81b14684d1fed69d1060bc5f4eda23\"" Jan 17 12:19:40.304015 
containerd[2088]: time="2025-01-17T12:19:40.303247165Z" level=info msg="StartContainer for \"39cc04a4844a606a8e3774aa8ce1a6f60e81b14684d1fed69d1060bc5f4eda23\"" Jan 17 12:19:40.402464 systemd[1]: run-containerd-runc-k8s.io-39cc04a4844a606a8e3774aa8ce1a6f60e81b14684d1fed69d1060bc5f4eda23-runc.C4cZCE.mount: Deactivated successfully. Jan 17 12:19:40.452415 containerd[2088]: time="2025-01-17T12:19:40.451926658Z" level=info msg="StartContainer for \"39cc04a4844a606a8e3774aa8ce1a6f60e81b14684d1fed69d1060bc5f4eda23\" returns successfully" Jan 17 12:19:40.695695 kubelet[2581]: E0117 12:19:40.695646 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:40.941473 kubelet[2581]: I0117 12:19:40.941437 2581 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:19:40.943540 kubelet[2581]: I0117 12:19:40.943512 2581 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:19:41.156901 kubelet[2581]: I0117 12:19:41.156749 2581 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-7xb6d" podStartSLOduration=17.256932983 podStartE2EDuration="22.156704214s" podCreationTimestamp="2025-01-17 12:19:19 +0000 UTC" firstStartedPulling="2025-01-17 12:19:31.784170316 +0000 UTC m=+28.926652211" lastFinishedPulling="2025-01-17 12:19:36.683941548 +0000 UTC m=+33.826423442" observedRunningTime="2025-01-17 12:19:37.155007956 +0000 UTC m=+34.297489871" watchObservedRunningTime="2025-01-17 12:19:41.156704214 +0000 UTC m=+38.299186301" Jan 17 12:19:41.696509 kubelet[2581]: E0117 12:19:41.696431 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:41.903754 kubelet[2581]: I0117 12:19:41.903717 2581 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-r24vf" podStartSLOduration=31.079337276 podStartE2EDuration="38.903641887s" podCreationTimestamp="2025-01-17 12:19:03 +0000 UTC" firstStartedPulling="2025-01-17 12:19:32.433821528 +0000 UTC m=+29.576303435" lastFinishedPulling="2025-01-17 12:19:40.258126139 +0000 UTC m=+37.400608046" observedRunningTime="2025-01-17 12:19:41.158772913 +0000 UTC m=+38.301254828" watchObservedRunningTime="2025-01-17 12:19:41.903641887 +0000 UTC m=+39.046123801" Jan 17 12:19:41.904036 kubelet[2581]: I0117 12:19:41.903968 2581 topology_manager.go:215] "Topology Admit Handler" podUID="f109f860-1a39-4441-a033-afee056d4115" podNamespace="default" podName="nfs-server-provisioner-0" Jan 17 12:19:41.926622 kubelet[2581]: I0117 12:19:41.926527 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f109f860-1a39-4441-a033-afee056d4115-data\") pod \"nfs-server-provisioner-0\" (UID: \"f109f860-1a39-4441-a033-afee056d4115\") " pod="default/nfs-server-provisioner-0" Jan 17 12:19:41.926622 kubelet[2581]: I0117 12:19:41.926632 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz559\" (UniqueName: \"kubernetes.io/projected/f109f860-1a39-4441-a033-afee056d4115-kube-api-access-mz559\") pod \"nfs-server-provisioner-0\" (UID: \"f109f860-1a39-4441-a033-afee056d4115\") " 
pod="default/nfs-server-provisioner-0" Jan 17 12:19:42.216258 containerd[2088]: time="2025-01-17T12:19:42.216209225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f109f860-1a39-4441-a033-afee056d4115,Namespace:default,Attempt:0,}" Jan 17 12:19:42.501377 systemd-networkd[1647]: cali60e51b789ff: Link UP Jan 17 12:19:42.504594 systemd-networkd[1647]: cali60e51b789ff: Gained carrier Jan 17 12:19:42.507443 (udev-worker)[3966]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.348 [INFO][3946] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.125-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default f109f860-1a39-4441-a033-afee056d4115 1091 0 2025-01-17 12:19:41 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.29.125 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.348 [INFO][3946] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.404 [INFO][3959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" HandleID="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Workload="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.419 [INFO][3959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" HandleID="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Workload="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267020), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.125", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-17 12:19:42.404866556 +0000 UTC"}, Hostname:"172.31.29.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.419 [INFO][3959] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.419 [INFO][3959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.419 [INFO][3959] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.125' Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.423 [INFO][3959] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.431 [INFO][3959] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.450 [INFO][3959] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.465 [INFO][3959] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.472 [INFO][3959] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.472 [INFO][3959] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.475 [INFO][3959] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94 Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.483 [INFO][3959] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.493 [INFO][3959] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.67/26] block=192.168.108.64/26 handle="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.493 [INFO][3959] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.67/26] handle="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" host="172.31.29.125" Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.494 [INFO][3959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:42.528546 containerd[2088]: 2025-01-17 12:19:42.494 [INFO][3959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.67/26] IPv6=[] ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" HandleID="k8s-pod-network.0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Workload="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.529758 containerd[2088]: 2025-01-17 12:19:42.496 [INFO][3946] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f109f860-1a39-4441-a033-afee056d4115", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:42.529758 containerd[2088]: 2025-01-17 12:19:42.496 [INFO][3946] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.67/32] ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.529758 containerd[2088]: 2025-01-17 12:19:42.496 [INFO][3946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.529758 containerd[2088]: 2025-01-17 12:19:42.504 [INFO][3946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.530205 containerd[2088]: 2025-01-17 12:19:42.508 [INFO][3946] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f109f860-1a39-4441-a033-afee056d4115", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"72:16:14:63:f0:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:42.530205 containerd[2088]: 2025-01-17 12:19:42.526 [INFO][3946] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.29.125-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:19:42.569650 containerd[2088]: time="2025-01-17T12:19:42.569502419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:42.569928 containerd[2088]: time="2025-01-17T12:19:42.569749817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:42.569928 containerd[2088]: time="2025-01-17T12:19:42.569796708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:42.570537 containerd[2088]: time="2025-01-17T12:19:42.570485760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:42.680263 containerd[2088]: time="2025-01-17T12:19:42.680217193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f109f860-1a39-4441-a033-afee056d4115,Namespace:default,Attempt:0,} returns sandbox id \"0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94\"" Jan 17 12:19:42.682829 containerd[2088]: time="2025-01-17T12:19:42.682703794Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:19:42.697385 kubelet[2581]: E0117 12:19:42.697343 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:43.664594 kubelet[2581]: E0117 12:19:43.664540 2581 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:43.704350 kubelet[2581]: E0117 12:19:43.704305 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:43.922903 systemd-networkd[1647]: cali60e51b789ff: Gained IPv6LL Jan 17 12:19:44.712365 kubelet[2581]: E0117 12:19:44.705493 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:45.712623 kubelet[2581]: E0117 12:19:45.712466 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:45.888659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251440235.mount: Deactivated successfully. Jan 17 12:19:46.332408 ntpd[2039]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:19:46.332983 ntpd[2039]: 17 Jan 12:19:46 ntpd[2039]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:19:46.713950 kubelet[2581]: E0117 12:19:46.713801 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:47.715990 kubelet[2581]: E0117 12:19:47.715934 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:48.717978 kubelet[2581]: E0117 12:19:48.717918 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:48.795321 containerd[2088]: time="2025-01-17T12:19:48.795271183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:48.796785 containerd[2088]: time="2025-01-17T12:19:48.796726236Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 12:19:48.798435 containerd[2088]: time="2025-01-17T12:19:48.798100415Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:48.808931 containerd[2088]: time="2025-01-17T12:19:48.808880896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:48.812364 containerd[2088]: time="2025-01-17T12:19:48.812308646Z" level=info msg="Pulled image 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.129533883s" Jan 17 12:19:48.812538 containerd[2088]: time="2025-01-17T12:19:48.812518000Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:19:48.827253 containerd[2088]: time="2025-01-17T12:19:48.827195512Z" level=info msg="CreateContainer within sandbox \"0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:19:48.849595 containerd[2088]: time="2025-01-17T12:19:48.849527929Z" level=info msg="CreateContainer within sandbox \"0ecf5b532a94f0e3a2af1ae7a92a6f2b8a0b4d108d3e5a8d6457492776b25d94\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bad0865aabc77e5c639e7a844643f2b063fd5798c6c342a8c9f918e7575980f2\"" Jan 17 12:19:48.852250 containerd[2088]: time="2025-01-17T12:19:48.851231572Z" level=info msg="StartContainer for \"bad0865aabc77e5c639e7a844643f2b063fd5798c6c342a8c9f918e7575980f2\"" Jan 17 12:19:48.925947 containerd[2088]: time="2025-01-17T12:19:48.925884789Z" level=info msg="StartContainer for \"bad0865aabc77e5c639e7a844643f2b063fd5798c6c342a8c9f918e7575980f2\" returns successfully" Jan 17 12:19:49.252827 kubelet[2581]: I0117 12:19:49.252777 2581 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.119037945 podStartE2EDuration="8.252725416s" podCreationTimestamp="2025-01-17 12:19:41 +0000 UTC" firstStartedPulling="2025-01-17 12:19:42.682058738 +0000 UTC m=+39.824540635" lastFinishedPulling="2025-01-17 12:19:48.815746206 +0000 UTC m=+45.958228106" observedRunningTime="2025-01-17 12:19:49.252687815 +0000 UTC m=+46.395169731" watchObservedRunningTime="2025-01-17 12:19:49.252725416 +0000 UTC m=+46.395207330" Jan 17 12:19:49.718499 kubelet[2581]: E0117 12:19:49.718442 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:50.719731 kubelet[2581]: E0117 12:19:50.719650 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:51.720607 kubelet[2581]: E0117 12:19:51.720553 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:52.581203 systemd[1]: run-containerd-runc-k8s.io-104850d0e0cff1175167151e595eb35f071bd25f26e3c3c9cb21789c8d771132-runc.Sypj4G.mount: Deactivated successfully. 
Jan 17 12:19:52.721404 kubelet[2581]: E0117 12:19:52.721365 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:53.722072 kubelet[2581]: E0117 12:19:53.722017 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:54.722450 kubelet[2581]: E0117 12:19:54.722376 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:55.722784 kubelet[2581]: E0117 12:19:55.722731 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:56.723871 kubelet[2581]: E0117 12:19:56.723811 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:57.724161 kubelet[2581]: E0117 12:19:57.724108 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:58.725300 kubelet[2581]: E0117 12:19:58.725246 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:59.726132 kubelet[2581]: E0117 12:19:59.726078 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:00.728580 kubelet[2581]: E0117 12:20:00.728512 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:01.729292 kubelet[2581]: E0117 12:20:01.729240 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:02.729826 kubelet[2581]: E0117 12:20:02.729771 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:03.664391 kubelet[2581]: E0117 12:20:03.664336 2581 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:03.734164 kubelet[2581]: E0117 12:20:03.733942 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:03.774334 containerd[2088]: time="2025-01-17T12:20:03.774285440Z" level=info msg="StopPodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\"" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.872 [WARNING][4167] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-csi--node--driver--r24vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"595a065e-7a20-4c97-aef2-aaf600d8f6c1", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328", Pod:"csi-node-driver-r24vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaff0e398840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.872 [INFO][4167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.872 [INFO][4167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" iface="eth0" netns="" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.872 [INFO][4167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.872 [INFO][4167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.985 [INFO][4174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.985 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:03.985 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:04.008 [WARNING][4174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:04.008 [INFO][4174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:04.020 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:04.028949 containerd[2088]: 2025-01-17 12:20:04.023 [INFO][4167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.028949 containerd[2088]: time="2025-01-17T12:20:04.028765854Z" level=info msg="TearDown network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" successfully" Jan 17 12:20:04.028949 containerd[2088]: time="2025-01-17T12:20:04.028805277Z" level=info msg="StopPodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" returns successfully" Jan 17 12:20:04.091965 containerd[2088]: time="2025-01-17T12:20:04.091911794Z" level=info msg="RemovePodSandbox for \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\"" Jan 17 12:20:04.091965 containerd[2088]: time="2025-01-17T12:20:04.091967778Z" level=info msg="Forcibly stopping sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\"" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.141 [WARNING][4194] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-csi--node--driver--r24vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"595a065e-7a20-4c97-aef2-aaf600d8f6c1", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"88e8abf06ddf3daa50856c363889353e8b3fcf9a96beaf7e8369eb2f4ab79328", Pod:"csi-node-driver-r24vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaff0e398840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.141 [INFO][4194] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.141 [INFO][4194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" iface="eth0" netns="" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.141 [INFO][4194] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.141 [INFO][4194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.176 [INFO][4200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.176 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.176 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.193 [WARNING][4200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.193 [INFO][4200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" HandleID="k8s-pod-network.fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Workload="172.31.29.125-k8s-csi--node--driver--r24vf-eth0" Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.206 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:04.211748 containerd[2088]: 2025-01-17 12:20:04.209 [INFO][4194] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51" Jan 17 12:20:04.216812 containerd[2088]: time="2025-01-17T12:20:04.211798208Z" level=info msg="TearDown network for sandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" successfully" Jan 17 12:20:04.232771 containerd[2088]: time="2025-01-17T12:20:04.232711641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:04.233095 containerd[2088]: time="2025-01-17T12:20:04.232802164Z" level=info msg="RemovePodSandbox \"fee2fdacb1d8f7ceab2c538379d8703e0b2eb466e87a6b0028074023d6f2aa51\" returns successfully" Jan 17 12:20:04.233418 containerd[2088]: time="2025-01-17T12:20:04.233387178Z" level=info msg="StopPodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\"" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.299 [WARNING][4218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"b2ada490-3902-4cde-9e69-9ef4430ae355", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634", Pod:"nginx-deployment-6d5f899847-7xb6d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali23c81d6a3b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.299 [INFO][4218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.299 [INFO][4218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" iface="eth0" netns="" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.300 [INFO][4218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.300 [INFO][4218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.334 [INFO][4224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.335 [INFO][4224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.335 [INFO][4224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.345 [WARNING][4224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.345 [INFO][4224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.355 [INFO][4224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:04.358835 containerd[2088]: 2025-01-17 12:20:04.357 [INFO][4218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.358835 containerd[2088]: time="2025-01-17T12:20:04.358651114Z" level=info msg="TearDown network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" successfully" Jan 17 12:20:04.358835 containerd[2088]: time="2025-01-17T12:20:04.358687292Z" level=info msg="StopPodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" returns successfully" Jan 17 12:20:04.362507 containerd[2088]: time="2025-01-17T12:20:04.359261085Z" level=info msg="RemovePodSandbox for \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\"" Jan 17 12:20:04.362507 containerd[2088]: time="2025-01-17T12:20:04.359300054Z" level=info msg="Forcibly stopping sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\"" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.409 [WARNING][4242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"b2ada490-3902-4cde-9e69-9ef4430ae355", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"6af96fa46fff0837c159d922f8586ad55d7ece6acb5dae0d416cbf3a0338f634", Pod:"nginx-deployment-6d5f899847-7xb6d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali23c81d6a3b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.410 [INFO][4242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.410 [INFO][4242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" iface="eth0" netns="" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.410 [INFO][4242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.410 [INFO][4242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.439 [INFO][4248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.439 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.439 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.458 [WARNING][4248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.459 [INFO][4248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" HandleID="k8s-pod-network.3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Workload="172.31.29.125-k8s-nginx--deployment--6d5f899847--7xb6d-eth0" Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.461 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:04.464002 containerd[2088]: 2025-01-17 12:20:04.462 [INFO][4242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33" Jan 17 12:20:04.466120 containerd[2088]: time="2025-01-17T12:20:04.464037878Z" level=info msg="TearDown network for sandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" successfully" Jan 17 12:20:04.470866 containerd[2088]: time="2025-01-17T12:20:04.470771874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:04.470866 containerd[2088]: time="2025-01-17T12:20:04.470847642Z" level=info msg="RemovePodSandbox \"3b5c6cbf665c9d5136ba11cbf6f994b4dcb400781cdd06021b12eb37252aba33\" returns successfully" Jan 17 12:20:04.734944 kubelet[2581]: E0117 12:20:04.734896 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:05.735607 kubelet[2581]: E0117 12:20:05.735539 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:06.736131 kubelet[2581]: E0117 12:20:06.736076 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:07.737262 kubelet[2581]: E0117 12:20:07.737209 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:08.737561 kubelet[2581]: E0117 12:20:08.737506 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:09.738507 kubelet[2581]: E0117 12:20:09.738435 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:10.739397 kubelet[2581]: E0117 12:20:10.739329 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:11.740451 kubelet[2581]: E0117 12:20:11.740399 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:12.740626 kubelet[2581]: E0117 12:20:12.740558 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:13.616864 kubelet[2581]: I0117 12:20:13.616823 2581 topology_manager.go:215] "Topology Admit Handler" podUID="dff0e852-01da-4606-9b4a-a2844a301691" podNamespace="default" podName="test-pod-1" 
Jan 17 12:20:13.741667 kubelet[2581]: E0117 12:20:13.741618 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:13.777939 kubelet[2581]: I0117 12:20:13.777889 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwjx\" (UniqueName: \"kubernetes.io/projected/dff0e852-01da-4606-9b4a-a2844a301691-kube-api-access-7jwjx\") pod \"test-pod-1\" (UID: \"dff0e852-01da-4606-9b4a-a2844a301691\") " pod="default/test-pod-1" Jan 17 12:20:13.777939 kubelet[2581]: I0117 12:20:13.777947 2581 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ddeeb73-d4a5-4cf7-a71c-1f7a102abb8a\" (UniqueName: \"kubernetes.io/nfs/dff0e852-01da-4606-9b4a-a2844a301691-pvc-0ddeeb73-d4a5-4cf7-a71c-1f7a102abb8a\") pod \"test-pod-1\" (UID: \"dff0e852-01da-4606-9b4a-a2844a301691\") " pod="default/test-pod-1" Jan 17 12:20:13.961407 kernel: FS-Cache: Loaded Jan 17 12:20:14.079875 kernel: RPC: Registered named UNIX socket transport module. Jan 17 12:20:14.079975 kernel: RPC: Registered udp transport module. Jan 17 12:20:14.080024 kernel: RPC: Registered tcp transport module. Jan 17 12:20:14.081082 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 12:20:14.081184 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 12:20:14.523063 kernel: NFS: Registering the id_resolver key type Jan 17 12:20:14.523200 kernel: Key type id_resolver registered Jan 17 12:20:14.523232 kernel: Key type id_legacy registered Jan 17 12:20:14.610373 nfsidmap[4280]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 12:20:14.616750 nfsidmap[4281]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 12:20:14.742392 kubelet[2581]: E0117 12:20:14.742274 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:14.829917 containerd[2088]: time="2025-01-17T12:20:14.829782029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dff0e852-01da-4606-9b4a-a2844a301691,Namespace:default,Attempt:0,}" Jan 17 12:20:15.064289 (udev-worker)[4272]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:20:15.064536 systemd-networkd[1647]: cali5ec59c6bf6e: Link UP Jan 17 12:20:15.070502 systemd-networkd[1647]: cali5ec59c6bf6e: Gained carrier Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.924 [INFO][4282] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.29.125-k8s-test--pod--1-eth0 default dff0e852-01da-4606-9b4a-a2844a301691 1200 0 2025-01-17 12:19:43 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.29.125 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.925 [INFO][4282] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.979 [INFO][4293] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" HandleID="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Workload="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.993 [INFO][4293] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" HandleID="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Workload="172.31.29.125-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fe1c0), Attrs:map[string]string{"namespace":"default", "node":"172.31.29.125", "pod":"test-pod-1", "timestamp":"2025-01-17 12:20:14.979651426 +0000 UTC"}, Hostname:"172.31.29.125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.993 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.993 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.993 [INFO][4293] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.29.125' Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:14.997 [INFO][4293] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.017 [INFO][4293] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.026 [INFO][4293] ipam/ipam.go 489: Trying affinity for 192.168.108.64/26 host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.029 [INFO][4293] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.033 [INFO][4293] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.033 [INFO][4293] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.038 [INFO][4293] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.044 [INFO][4293] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.054 [INFO][4293] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.68/26] block=192.168.108.64/26 handle="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.054 [INFO][4293] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.68/26] handle="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" host="172.31.29.125" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.054 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
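Editor's note: the IPAM entries above claim 192.168.108.68 out of the block 192.168.108.64/26 that is affine to host 172.31.29.125. A minimal sketch (assumed helper code, not Calico's) checking that arithmetic with the standard library:

    // Verify the claimed address sits inside the host-affine /26 block.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.108.64/26")
        if err != nil {
            panic(err)
        }
        ip := net.ParseIP("192.168.108.68")
        ones, bits := block.Mask.Size()
        fmt.Printf("block holds %d addresses\n", 1<<(bits-ones)) // 64 for a /26
        fmt.Println("contains claimed IP:", block.Contains(ip))  // true
    }

A /26 block therefore gives this node up to 64 pod addresses before another block would need to be claimed.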
Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.054 [INFO][4293] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.68/26] IPv6=[] ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" HandleID="k8s-pod-network.bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Workload="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.098324 containerd[2088]: 2025-01-17 12:20:15.056 [INFO][4282] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dff0e852-01da-4606-9b4a-a2844a301691", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:15.101623 containerd[2088]: 2025-01-17 12:20:15.056 [INFO][4282] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.68/32] ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.101623 containerd[2088]: 2025-01-17 12:20:15.056 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.101623 containerd[2088]: 2025-01-17 12:20:15.071 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.101623 containerd[2088]: 2025-01-17 12:20:15.074 [INFO][4282] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.29.125-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dff0e852-01da-4606-9b4a-a2844a301691", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 43, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.29.125", ContainerID:"bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"b6:10:8b:bf:70:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:15.101623 containerd[2088]: 2025-01-17 12:20:15.085 [INFO][4282] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.29.125-k8s-test--pod--1-eth0" Jan 17 12:20:15.147060 containerd[2088]: time="2025-01-17T12:20:15.146774402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:15.147060 containerd[2088]: time="2025-01-17T12:20:15.146855914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:15.147060 containerd[2088]: time="2025-01-17T12:20:15.146894184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:15.147934 containerd[2088]: time="2025-01-17T12:20:15.147791162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:15.240188 containerd[2088]: time="2025-01-17T12:20:15.240139510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dff0e852-01da-4606-9b4a-a2844a301691,Namespace:default,Attempt:0,} returns sandbox id \"bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b\"" Jan 17 12:20:15.245593 containerd[2088]: time="2025-01-17T12:20:15.245447625Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:20:15.582717 containerd[2088]: time="2025-01-17T12:20:15.582667743Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:15.583899 containerd[2088]: time="2025-01-17T12:20:15.583840862Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 12:20:15.593657 containerd[2088]: time="2025-01-17T12:20:15.593297346Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 347.717455ms" Jan 17 12:20:15.593657 containerd[2088]: time="2025-01-17T12:20:15.593450726Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:20:15.599186 containerd[2088]: time="2025-01-17T12:20:15.599078148Z" level=info msg="CreateContainer within sandbox \"bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 12:20:15.622664 containerd[2088]: time="2025-01-17T12:20:15.622613738Z" level=info msg="CreateContainer within sandbox \"bc07264930c3253f7ad85f1562af1a4e5770863952c8a4b898566667f513b76b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"55fcd143b68d3e4d9c39aec43284dc4d02a7ea464cc986f95a9503577657c576\"" Jan 17 12:20:15.623724 containerd[2088]: time="2025-01-17T12:20:15.623565627Z" level=info msg="StartContainer for \"55fcd143b68d3e4d9c39aec43284dc4d02a7ea464cc986f95a9503577657c576\"" Jan 17 12:20:15.693520 containerd[2088]: time="2025-01-17T12:20:15.693468698Z" level=info msg="StartContainer for \"55fcd143b68d3e4d9c39aec43284dc4d02a7ea464cc986f95a9503577657c576\" returns successfully" Jan 17 12:20:15.743139 kubelet[2581]: E0117 12:20:15.743069 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:16.322798 kubelet[2581]: I0117 12:20:16.322756 2581 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.971699429 podStartE2EDuration="33.322716566s" podCreationTimestamp="2025-01-17 12:19:43 +0000 UTC" firstStartedPulling="2025-01-17 12:20:15.244918263 +0000 UTC m=+72.387400165" lastFinishedPulling="2025-01-17 12:20:15.595935396 +0000 UTC m=+72.738417302" observedRunningTime="2025-01-17 12:20:16.32264426 +0000 UTC m=+73.465126176" watchObservedRunningTime="2025-01-17 12:20:16.322716566 +0000 UTC m=+73.465198480" Jan 17 12:20:16.370880 systemd-networkd[1647]: cali5ec59c6bf6e: Gained IPv6LL Jan 17 12:20:16.743898 kubelet[2581]: E0117 12:20:16.743842 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:20:17.744746 kubelet[2581]: E0117 12:20:17.744688 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:18.746001 kubelet[2581]: E0117 12:20:18.745943 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:19.332481 ntpd[2039]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:20:19.332972 ntpd[2039]: 17 Jan 12:20:19 ntpd[2039]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:20:19.746490 kubelet[2581]: E0117 12:20:19.746433 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:20.746855 kubelet[2581]: E0117 12:20:20.746599 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:21.747851 kubelet[2581]: E0117 12:20:21.747799 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:22.588052 systemd[1]: run-containerd-runc-k8s.io-104850d0e0cff1175167151e595eb35f071bd25f26e3c3c9cb21789c8d771132-runc.hAHj75.mount: Deactivated successfully. Jan 17 12:20:22.748619 kubelet[2581]: E0117 12:20:22.748540 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:23.663996 kubelet[2581]: E0117 12:20:23.663941 2581 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:23.749579 kubelet[2581]: E0117 12:20:23.749513 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:24.750297 kubelet[2581]: E0117 12:20:24.750244 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:25.751538 kubelet[2581]: E0117 12:20:25.751454 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:26.751834 kubelet[2581]: E0117 12:20:26.751775 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:27.752989 kubelet[2581]: E0117 12:20:27.752932 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:28.754135 kubelet[2581]: E0117 12:20:28.754081 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:29.755196 kubelet[2581]: E0117 12:20:29.755134 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:30.758270 kubelet[2581]: E0117 12:20:30.757589 2581 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"